Thursday, October 14, 2010

C++ turns 25

Just wanted to share a quick link: Bjarne Stroustrup’s reflections on the 25th anniversary of C++’s first release.

Friday, October 8, 2010

A Generic Function to enhance BPT components

About two years ago, we created a set of QTP function libraries to drive our automation effort using Quality Center's Business Process Testing (BPT) functionality. These libraries provide a set of generic functions, from clicking a UI object to verifying an object's property, which can be used to create both generic BPT components that work on any application and specific components that target a particular application or a screen within one. We have used these libraries and components to automate our application testing successfully, with little modification or maintenance.

Recently, we were using the same set of components to create and update the test cases for a new application, and I took the opportunity to enhance the libraries by adding a few functions. One of them is a generic Eval function that provides more flexibility in creating components. VBScript has its own Eval function, which evaluates the provided expression and returns the result. To make that functionality available from within components, I created a new function that returns the result of the expression, or the string itself if it is not an expression. Here's the function listing:

' Function EvalFunction
' ---------------------
' Evaluates the expression specified in the parameter and returns the result
' Parameter: expr - Any VBScript expression
'@Description Evaluates the expression specified in the parameter and returns the result
'@Documentation Evaluates the expression specified in the parameter and returns the result
Public Function EvalFunction(ByVal expr)
  Dim res
  On Error Resume Next
  If (expr <> "") Then
    res = Eval(expr)
    If (Err.Number <> 0) Then
      res = -1
      Err.Clear
    ElseIf res = "" Then
      res = expr
    End If
  End If
  EvalFunction = res
  On Error GoTo 0
End Function

The function itself is quite simple, but it adds a lot of flexibility. I can use it in my BPT components to evaluate any expression at run time. In addition, I also created a generic component that calls this function and returns the result. When a test case needs a run-time value, I can use this component within the test to provide it.

Using the EvalFunction function in components:

For one part of our application's functionality, we had a component ("AddIP") that adds an IP address to a user. Since there were 4 text fields, one for each octet, we had 4 component input parameters, one per octet. In the picture of the AddIP component below, each octet is a different text field (“IP1”…”IP4”) requiring a different input parameter (BrowserWebEditSet is another generic function that enters a value in a WebEdit object).


But while revisiting the testing, I felt that it's much easier to provide the IP parameter as a whole when creating a large test case where multiple IPs need to be added. So instead of writing a new function, I used EvalFunction to create another component ("AddIPv2") that takes the full IP address, splits it into octets, and enters the values into the different text boxes. I use the VBScript Split function to split the input parameter (the IP address) into octets like this: "Split(""" & Parameter("IP") & """,""."",-1,vbTextCompare)(0)", which returns the 1st octet. As shown in the picture of the component below, I call the function 4 times, once per octet; each call returns its result in a local parameter which I then use to set the corresponding text field.
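For comparison, the same octet-splitting idea can be sketched outside QTP in plain C (a hypothetical helper, not part of the libraries described here; sscanf does the work of VBScript's Split):

```c
#include <stdio.h>

/* Split a dotted-quad IP string into its four octets.
   Returns 1 on success, 0 if the string is not a valid dotted quad. */
int split_ip(const char *ip, int octets[4])
{
    return sscanf(ip, "%d.%d.%d.%d",
                  &octets[0], &octets[1], &octets[2], &octets[3]) == 4;
}
```

Calling split_ip("192.168.1.10", octets) fills octets[0] through octets[3] with 192, 168, 1 and 10, mirroring the four EvalFunction calls in the component.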


So with this enhanced component, the user has to provide only the whole IP address instead of each octet in a separate parameter.

Using the EvalFunction component in BPT tests:

The EvalFunction component is a one operation component that calls the EvalFunction and returns the result in a component output parameter. This component can be used in any test to evaluate any expression and use the result in a subsequent component.

EvalFunction Component

For example, a lot of our test cases require creating a new user. Instead of specifying a fixed value for the user ID or having the tester enter a value in a run-time parameter before every run, the user ID can be generated from a timestamp, which guarantees its uniqueness. This is done using the EvalFunction component. In the BPT test case below, the EvalFunction component is called with an input parameter that concatenates a string prefix with the current timestamp (DateFormatter is another function that returns the current date/time in the appropriate format). It returns the result in a component parameter “result”, which is later used in the AddUser component to create a user with that user ID.

EvalFunction Component 2

This way, this test can be run any number of times without requiring any modification.
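The timestamp-based ID generation can be sketched in C as well (illustrative only; the "user" prefix and the yyyymmddhhmmss format are assumptions, not the actual DateFormatter output):

```c
#include <stdio.h>
#include <time.h>

/* Append a yyyymmddhhmmss timestamp to a prefix to form a user ID that
   is unique for any two runs more than a second apart. */
void make_user_id(char *out, size_t outlen, const char *prefix, time_t now)
{
    struct tm *t = localtime(&now);
    snprintf(out, outlen, "%s%04d%02d%02d%02d%02d%02d",
             prefix, t->tm_year + 1900, t->tm_mon + 1, t->tm_mday,
             t->tm_hour, t->tm_min, t->tm_sec);
}
```

make_user_id(id, sizeof(id), "user", time(NULL)) yields something like user20101008143015.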


The function shown in this post is part of the automation libraries that we created. Over time, as we have used them to test multiple applications, we have enhanced the libraries even further. These libraries, along with the BPT functionality provided by Quality Center, have helped us create reusable components that require very little maintenance, and testers are using these components to create and run automated test cases for different applications and functionalities.

Wednesday, September 29, 2010

HP Quality Center (ALM 11.0) and REST

We are currently using Quality Center 9.2 and one of my goals for this year is to upgrade it to the latest version. While browsing through the What’s New documentation for version 11.0 (seems like it has been rebranded to Application Lifecycle Management, or ALM), I saw this line:

There are now ALM REST resources available. For details, see the HP ALM REST API Reference.

To me, that stood out from all the other features. Of course, since 9.2 there have been other great additions as well, like version control and flows in version 10, and Sprinter among others in 11. But being able to write clients using the REST API is a great feature and something I will definitely be exploring once we have this version installed. I went through the REST API reference and, as expected, it exposes all the entities in a RESTful way that should make it easy to write clients in Java or any other language.

Related: a previous REST-related post

Friday, July 2, 2010

Intercepting SSL traffic using WebScarab

The last time I wrote about intercepting web requests using WebScarab, I was successful in intercepting SSL traffic generated through a custom Java client. Even though the process to do that was quite tedious – involving exporting the WebScarab server certificates into .cer format, importing the certificate into a Java keystore and then running WebScarab as a reverse proxy – I was able to intercept and view the SSL traffic that was being generated. But there was an inherent issue with that process that I overlooked.

When a proxy is set up to intercept SSL traffic, the security issue is that the SSL certificate presented by the proxy is not signed by a trusted authority. Web browsers detect this and give the user the option to accept or reject the risk. So there is no problem using the proxy to intercept web traffic to secure sites: we can just point the browser to the proxy and accept the certificate warning. But in the case of Java clients using JSSE, there is no assumption of an interactive user session, so by default an exception is thrown for any certificate-related issue, be it an unknown certificate, in which case it throws: PKIX path building failed: unable to find valid certification path to requested target

or a hostname mismatch: No name matching … found

The latter is thrown because when a new HTTPS connection is created using the HttpsURLConnection class, it uses a default HostnameVerifier implementation which checks whether the host we’re trying to connect to matches the name in the certificate (specifically the CN within the certificate). If it doesn’t, it throws the above exception. The client I was using earlier overrode the default HostnameVerifier with a custom one that ignored the hostname mismatch. But this time, with a new client for a different application, it didn’t, and I had to go one extra step to intercept the requests, which is detailed below. So first:

  1. Start WebScarab and run it as a reverse proxy on port 443. This is so that WebScarab behaves as a secure server to the client, even if with a self-signed certificate instead of one signed by a trusted authority. (If running WebScarab from the same machine that is generating the requests, we should also select “Intercept requests” check box. This is important because in that case, the proxy is an infinite loop to its own interface and so we want to be able to break the flow and Abort after the first intercept)
  2. Modify the hosts file to point the WebScarab hostname to the IP of the machine where it is running. If running locally, it should be:    WebScarab
    This is specifically to get around the hostname mismatch issue, because we’ll connect to the host “WebScarab” instead of the actual target server. If the client overrides the default HostnameVerifier to ignore those errors, the entry can instead point the actual hostname at the proxy:
    <ip where WebScarab is running>    <target hostname>
  3. Use the java program available here to create a keystore with the WebScarab certificate
    >>java InstallCert WebScarab
    Since WebScarab hostname is pointing to the WebScarab proxy, this program will connect to it and retrieve its certificate. It will create a keystore file called jssecacerts with WebScarab’s certificate (keystore password is blank by default).
  4. Configure the client to use WebScarab as the host within the URL. So instead of https://<hostname>/<path>, it should be https://WebScarab/<path>.
  5. Run the java client with the truststore and password properties:<location to jssecacerts file><password, default blank>

At this point, WebScarab proxy should intercept the request. I can review it, and abort it so it doesn’t repeat. Obviously, the request can’t be sent to the actual destination server. As I’ve noted above and as far as I know, there’s no way to get around the hostname mismatch error unless the default HostnameVerifier is overridden. But in my case, I was fine with just intercepting the request and creating my LoadRunner scripts using the raw HTTP request.

Friday, May 21, 2010

Goodbye multiple putty windows!

If you use PuTTY for your remote telnet or SSH needs, chances are you have had to open and manage multiple PuTTY windows at once. I used to do that, and it was sometimes confusing. But recently, by sheer luck (my laptop had crashed and I was looking for ways to export/import PuTTY connection profiles), I found PuTTY Connection Manager. It's a tabbed version of the PuTTY client and provides a solution for managing multiple PuTTY instances. I've tried it and it works great!

Since I’m endorsing some good tools that I use every day, here are some others:

- Agent Ransack: a file searching utility for Windows, with great features for searching by regular expression, within files, etc. I haven’t used Windows’ built-in search since I found this.

- SketchPath: an XPath tool that you can use to view XML files and run XPath queries.

- Sysinternals: a collection of utilities for various troubleshooting tasks in Windows, including Process Explorer (an advanced version of Task Manager), TcpView (shows all open connections with their processes), and more.

- WinMD5: an MD5 utility for Windows.

That’s it for now. I’ll update this post if I can remember more tools to recommend.

Saturday, May 15, 2010

LoadRunner Scripting Challenge – (AJAX, JSON, REST & XML)

If you are not familiar with KodakGallery, it is an online photo publishing and sharing site offered by Kodak. It offers features to store and share your photos online and to print them or order photo products, just like competing sites such as Flickr, Snapfish, and Picasa. What it doesn’t offer is an API to interact with your account (uploading/downloading pictures, etc.) without going through the website. Flickr (the site I use) has a published API that can be used.


Last year, KodakGallery changed their storage policy so that users would have to make a minimum purchase from the site, based on their storage size, in order to continue storing pictures. Even though the cost is minimal for the storage they offer, it was still worthwhile to explore other options. The direct impact on me was that I was tasked with downloading all the pictures that had been stored there over the last few years. Manual download was out of the question because of the number of pictures that had accumulated. Scripting it with Perl or LoadRunner (my weapon of choice) was very viable, given my experience with these kinds of situations and the thrill of being faced with a challenge and learning something new from it.

Note: If you navigated here looking for an automated way to download all your pictures from KodakGallery, I’m planning to create a Perl script to automate that. You can safely ignore the rest of the post and leave me a comment, which will provide me with the motivation to stay up extra hours at night.


So here’s what we needed to do: build a script that logs in to the KodakGallery website and downloads all stored pictures to the local disk, organized into their album folders.

If you’re a LoadRunner user who has to use it in your daily life (at least while at work) and want to give it a shot, please do so before going through the post. You’ll need a KodakGallery account and a few albums with pictures in them. I promise that it’ll be fun and challenging. I also have to state that scripting/automating interactions with a website has to be done with some caution: you may be using the website’s resources in a way they were not intended to be used. Sean Burke, author of Perl & LWP, puts it in a very succinct and precise manner here:

…the data on the Web was put there with the assumption (sometimes implicit, sometimes explicit) that it would be looked at directly in a browser. When you write an LWP program that downloads that data, you are working against that assumption. The trick is to do this in as considerate a way as possible.


With that in mind, let’s get to it. Scripting a live website comes with its own unknowns. The fact that you have no idea about the underlying technologies used to transfer the data and present it to the user provides a great opportunity to learn not only about some new technologies but also about the tool that you use because you may have to use the tool (or some of its features) in ways that you’ve never done before.

The first step in LoadRunner scripting, of course, is to record the user’s interaction with the website. For the record, I only have access to LoadRunner 8.0, which I’m using for the scripting here. I’ve heard that newer versions have better support for the web technologies that have emerged in recent years, but in the last few years in my current role, there has never been a time when I wasn’t able to deliver a script because of using an older version, and I have never felt that I was missing something.

I created a script using my preferred recording options:

Recording mode: HTML-based script containing explicit URLs only
Not using correlation
Not recording headers
Not excluding content types
Do not record these content types as a resource
Record non-HTML elements in the current HTML function

I put a comment in the script before every action so that when editing the script later, I know clearly where each action starts and ends. But after recording, the script in this case was not very intuitive, and some of the actions didn’t correspond to my comments. For example, where I had put the comment for login, I didn’t see any web request that would match a login request, or any form-based submission with login parameters; instead, there was a web_url request to “storagestatus.jsp”. Where I had put a comment for clicking on an album, there were no steps at all.

After scanning through the recorded script and the recording log, I realized that the login and other actions were being submitted through JavaScript, and the content type of these requests was non-HTML. My recording settings specified that requests without a content type of “text/html” or “text/xml” were not to be recorded as resources, so those were considered part of the current step and included after the EXTRARES marker of the original request. Here’s the initial request. The login request is included in the original step (I later found that it has Content-Type: text/javascript), and the authentication itself is handled by submitting the username and password in an HTTP cookie called “ssoCookies”:

... //lots of images

With that information, I saved the current script and recorded another one with different recording options. I used a URL-based script that records all the content (including non-HTML resources like css, js, gif, etc.) in separate web_url functions. That meant the script was longer and a little harder to navigate, but at least I could scan through it and match each request to its corresponding action.

Having recorded the actions (login, navigate to an album, download a full-resolution image, etc.), scanned through the script multiple times, and used WebScarab (related post) to examine the HTTP traffic, I found a lot of interesting things:

  1. Login is handled through JavaScript. A lot of JavaScript. The site uses the MooTools JavaScript libraries for much of its functionality, but for login it creates the cookie “ssoCookies” and sends a request to “login.jsp” which (after successful authentication) returns some user information (ssId: probably some sort of unique user identifier; firstname: the user’s first name) in a script (Content-Type: text/javascript). That executes another JavaScript function (“callSignInComplete”), which sends the user to “/gallery/storagestatus.jsp”, which then ultimately redirects (HTTP 302) to “/gallery/creativeapps/photoPicker/albums.jsp”.
    >>Here’s some interesting information on how this lazy JavaScript loading works:
  2. The site uses JavaScript and AJAX extensively to request resources and to present the response. For example, the album list (the images it uses to show the links to individual albums) and the pictures within the albums (once you click on an album) are retrieved asynchronously using XMLHttpRequest object.
  3. Complementary to AJAX, it uses JSON to exchange data. For example, the request for the list of albums returns a JSON response with the album list and details. Here are the HTTP request headers for the album list (x-request and x-requested-with are custom HTTP headers used by the app):
    "GET /site/rest/v1.0/albumList HTTP/1.1\r\n"
    "Accept-Encoding: gzip, deflate\r\n"
    "Accept: application/json\r\n"
    "x-requested-with: XMLHttpRequest\r\n"

    And here’s a part of the JSON response:


  4. It implements its services using the REST architectural style. Representational State Transfer (REST) is an architectural style for exposing services on the web, and you can read more about it online…but to a LoadRunner scripter, XML services exposed RESTfully are no different from any other XML-based service over HTTP. Some more information about REST:
  5. It includes some tracking cookies and requests to third-party sites that can be safely ignored and commented out.

So with all that information, it was fairly easy to visualize the script’s high-level steps:

  1. Navigate to home page & login.
  2. Send a REST-style request to get the list of Albums (name, URI etc)
  3. For each Album in the list
    1. Get the name of the Album and create a corresponding folder on the local disk
    2. Send a request to get the list of all photos in the album
    3. For each photo
      1. Send a request to download the image file
      2. save the image file in the album folder
  4. Logout
Step 1:

The 1st step is to navigate to the “welcome.jsp” page, which returns the session cookies used throughout the session. I deleted all the extra requests in the script for images, css files, etc. The next step is to log in; to do that, we send a request to login.jsp with a random 9-digit number, and with the username and password in the “ssoCookies” cookie. All other session cookies are handled automatically by LR.
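The random 9-digit number itself is trivial to produce; a plain-C sketch of generating one (illustrative only, since the site generates it client-side in JavaScript) might be:

```c
#include <stdlib.h>

/* Return a pseudo-random 9-digit number (100000000..999999999).
   Two rand() calls are combined so the range does not depend on RAND_MAX. */
long random_nine_digits(void)
{
    long r = ((long)(rand() % 32768) << 15) | (rand() % 32768);
    return 100000000L + r % 900000000L;
}
```

Seed with srand(time(NULL)) once per run if distinct sequences are needed between runs.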




Step 2:

Once login is successful, we need to get the list of the user’s albums. This is done by sending a GET request directly to the REST-style URL “/site/rest/v1.0/albumList”. We also need to save the response body in a parameter that we’ll parse to get the album details.

web_reg_save_param("albumList", "LB=", "RB=", "Search=Body", LAST);

Now here’s the beauty of how the service has been implemented. When you visit the website through a browser, the response to this request is a JSON-formatted string. But we can send the request in such a way that it returns the album list as XML rather than JSON: all we have to do is omit the “Accept: application/json” header and send “Accept: */*” instead. Since it’s easier to use LR’s built-in XML functions to parse XML strings, we do exactly that. LR’s web_url() function sends “Accept: */*” by default, so we get an XML response with the album list.

Step 3:

So once we have the Album List in XML, I use lr_xml_get_values() to get the id of all the albums in the list.

numAlbums = lr_xml_get_values("Xml={albumList}", "Query=/AlbumList/Album/id","SelectAll=yes", "ValueParam=albumId", LAST); 

It returns the number of matches for the XPath query, which is the number of albums the user has. The parameter “albumId” holds all of these ids and will be used to get the list of photos in each album.

Step 3.1:

Now, for each of these albums, we get the id from the parameter “albumId”, get the name of the album by calling lr_xml_get_values again with the id in the XPath, and then create the corresponding directory.

sprintf(sfx, "{albumId_%d}", j);
lr_save_string(lr_eval_string(sfx), "aid");

lr_xml_get_values("Xml={albumList}", "Query=/AlbumList/Album[id='{aid}']/name", "ValueParam=albumName", "NotFound=Continue", LAST);

// build the local folder path from the album name (baseDir is the
// download root, defined elsewhere in the script)
sprintf(dname, "%s\\%s", baseDir, lr_eval_string("{albumName}"));
if (mkdir(dname)) { // works, but needs better error handling
    lr_output_message("Create directory %s failed", dname);
    return -1;
}

Step 3.2:

With the album id, we send another GET request to the album’s REST-style URL (ending in “{aid}”). Just like we did above for the album list, we save the response body, which contains the list of all photos in the album.

        //---get album details
web_reg_save_param("albumDetails", "LB=", "RB=", "Search=Body", LAST);

numPics = lr_xml_get_values("Xml={albumDetails}", "Query=/Album/pictures/photoUriFullResJpeg","SelectAll=yes", "ValueParam=fullResURI", LAST);

And again, just as we did above, we use lr_xml_get_values() to get the URIs of the full-resolution pictures. It returns the number of pictures and the URIs in a parameter.

Step 3.3:

Now, for each picture, we get the URI of the full-resolution image from the parameter (lines 3-4 below). We need the filename, which is returned in the “Content-Disposition” HTTP header (line 7), and we also save the whole response body (the binary image data) in a parameter (line 9) that we later write to the local disk.

   1:         //get all photos in the album
   2:         for (i=1;i<=numPics;i++){
   3:             sprintf(sfx,"{fullResURI_%d}", i);
   4:             lr_save_string(lr_eval_string(sfx), "uri");
   6:             //save the file name that's part of Content-Disposition header
   7:             web_reg_save_param("filename", "LB=Content-Disposition: attachment;filename=", "RB=\r\n", "Search=Headers", LAST);
   8:             //save the whole HTTP body of request
   9:             web_reg_save_param("body", "LB=", "RB=", "Search=Body", LAST);
  11:             web_url("FS",
  12:                 "URL={uri}",
  13:                 "TargetFrame=",
  14:                 "Resource=1",
  15:                 "RecContentType=image/jpeg",
  16:                 "Referer=",
  17:             LAST);
  19:             lr_eval_string_ext("{body}",strlen("{body}"), &buf, &prmLen, 0, 0, -1);

Then we send a GET request to the URI, which returns the full-resolution image. But since this is a dynamically generated response (“Transfer-Encoding: chunked”), the server doesn’t return the size of the file in the Content-Length HTTP header, which we could otherwise have used when writing the contents to a file. Instead, we use lr_eval_string_ext() (line 19) to save the value in a buffer and get the buffer’s length.

Now we have everything needed to save the file: its name, size, and contents. We use standard C file handling functions for that and finally free the memory using lr_eval_string_ext_free().

sprintf(fname, "%s\\%s", dname, lr_eval_string("{filename}"));
if ((file = fopen(fname, "wb")) == NULL) {
    lr_output_message("Unable to create %s", fname);
    return -1;
}
fwrite(buf, prmLen, 1, file);
fclose(file);
lr_eval_string_ext_free(&buf);

The code then loops to save each image in each album locally.
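Stripped of the LoadRunner calls, the save step reduces to the standard C pattern below (a self-contained sketch; fname and buf correspond to the parameter values above):

```c
#include <stdio.h>

/* Write a binary buffer to disk. Returns 0 on success, -1 on error. */
int save_buffer(const char *fname, const char *buf, size_t len)
{
    FILE *file = fopen(fname, "wb");
    if (file == NULL)
        return -1;
    fwrite(buf, len, 1, file);
    fclose(file);
    return 0;
}
```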

Step 4:

The last step is to logout which is just another request to logout.jsp with a random number. It goes through similar steps as the login and finally redirects back to the home page.


This was a great challenge, and if you actually made it this far, I hope you enjoyed reading it and trying it yourself. I learned a lot from this, and I hope you do too: specifically about REST, JSON, JavaScript lazy loading, and HTTP chunked transfer, as well as a bit more about reading LoadRunner recording logs, saving an HTTP response to a file, and LR’s XML functions.

As I noted earlier, if you went through all this just looking for an automated way to download your pictures from KodakGallery, please leave me a comment and I’ll work on a Perl script to automate that.

Wednesday, March 3, 2010

LoadRunner, Memory Violations, lr_eval_string_ext and Pointers (ANSI C Style)

I think it’s quite accurate to say that an average programmer like me is daunted when first faced with pointers and memory management in C. During my initial programming years (much of which was inevitably in C), I tried my best to avoid pointers in my code, whether by using character arrays with pre-defined sizes (who cares if it takes more memory than needed) or some other “nifty” trick… I thought I could get away with it as long as I could compile and run just “this” program. But I had to face it during a networking class in school, when I worked on a peer-to-peer file sharing project and one of my classmates convinced me to “do it right” and heed the requirement that the program work with other students’ code.

So after much head-scratching and soul-searching, I begrudgingly revisited the concepts and began (or so I thought) to grasp the ideas of address spaces, l-values vs. r-values, and pointers. It all seemed to make sense and fit in place like a jigsaw puzzle finally coming together. Before long, with some encouragement and confidence building, I rewrote the program using pointers instead of pre-allocated char arrays, making it as compatible as possible. But whatever feeling of competence I had was shattered when I compiled the program for the first time and got compile-time errors that didn’t make any sense at all. It was even worse when, after a few cycles of modifying and debugging, it finally compiled beautifully (oh, the elation…) and then, the first time I ran it, threw a slew of memory violation errors and crashed just as beautifully.

Years have passed since then, some spent writing code in other languages (it doesn’t have pointers? I’ll take 10), and some in coming to terms with the fact that, even after understanding the concepts, if I now have to write a C program and decide to (or have to) use pointers, the error messages still baffle the heck out of me. Brief moments of competence have existed: after writing, rewriting, and debugging a few more times, I was finally able to turn out a decent piece of code that accomplished what it was supposed to in a relatively efficient manner.


So how does all this relate to the subject of this post? A few days ago, I was writing a LoadRunner script for a web application with an inquiry page that submitted a request. Depending on the data submitted, the request returned either a response page with the final result or a set of intermediate questions that needed to be answered. After the answers were submitted, it again returned either another set of questions or the final result; once answers were submitted the second time, it always returned the final result page. The number of questions returned was not constant, though 8 was the maximum. So part of the scenario logic was this:

a. Submit the initial inquiry
b. If Question Set A is returned, determine the number of questions, construct an answer string and submit
c. If Question Set B is returned, determine the number of questions, construct an answer string and submit
d. Final response


Since the questions were returned in a select box, it was easy to find the left and right boundaries and use web_reg_save_param with "Ord=All". In this case, since the questions were in the form:

<SELECT NAME="Answer2" SIZE="5">
  <OPTION value="0"> ABC</OPTION><br>
  <OPTION value="1"> DEF</OPTION><br>
  <OPTION value="2"> MTG</OPTION><br>
  <OPTION value="3"> SVG</OPTION><br>
  <OPTION value="4">NONE</OPTION><br>
</SELECT>

it will be:

web_reg_save_param ("suffix", "LB=<SELECT NAME=\"Answer", "RB=\" SIZE", 
"Ord=All","NOTFOUND=Warning", LAST);

However, it wasn't as simple as reading the number of matches from {suffix_count} and looping to create a custom answer string. Each question had a relative order from 1 to 8 (the numeral after “Answer” above), and the answer string had to be constructed from those numbers. The challenge was that the number of questions was variable and the order didn’t always start at 1 and go 1, 2, 3, and so on. So if 4 questions were returned, they could be labeled Answer2, Answer3, Answer5, Answer6, and the answer string would then be answer2=2&answer3=2&answer5=2&answer6=2 (it didn’t matter whether the questions were answered correctly, so a constant 2 would do). If I submitted answer1=2… here, it would return an error saying “Invalid Response” or something similar.


So what I had to do was save each suffix number in a temp variable and build the answer string by concatenation. Something like:

c = atoi(lr_eval_string("{suffix_count}"));
answerString[0] = '\0';
for (i = 1; i <= c; i++) {
    strcat(answerString, "Answer");
    sprintf(sfx, "{suffix_%d}", i);
    strcat(answerString, lr_eval_string(sfx));
    strcat(answerString, "=2&");
}
lr_save_string(answerString, "aString");
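As a self-contained illustration, the same concatenation can be run outside LoadRunner, with the suffixes in a plain array standing in for the {suffix_N} parameters:

```c
#include <stdio.h>
#include <string.h>

/* Build the answer string from the question suffixes,
   e.g. {2, 3, 5, 6} -> "Answer2=2&Answer3=2&Answer5=2&Answer6=2&" */
void build_answer_string(char *out, const int *suffixes, int count)
{
    int i;
    out[0] = '\0';
    for (i = 0; i < count; i++) {
        char piece[32];
        sprintf(piece, "Answer%d=2&", suffixes[i]);
        strcat(out, piece);
    }
}
```

(The trailing '&' is kept here to match the loop above; the real request may need it trimmed.)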

And then I would create a web_custom_request and submit it:


My first instinct, as I’ve mentioned earlier was to use character arrays for both answerString and sfx.

char answerString[256], sfx[10];

I figured the chances of the string being longer than 256 characters were remote, so I was safe. And it worked fine when I ran it in VuGen. But when I ran the load scenario, all the users in this script’s group failed after exactly the 4th or 5th iteration, with an error I hadn’t seen before:

Action.c(162): Error (-17991): Failed to add item to mfifo data structure.

on the line with lr_eval_string. I searched online and came across a link which suggested using lr_eval_string_ext instead of lr_eval_string to free memory earlier. The help on lr_eval_string also mentions:

Note: lr_eval_string allocates memory internally. The memory is freed at the end of each iteration. If you evaluate a parameter or parameters in a loop, conserve memory by not using lr_eval_string. Instead, use lr_eval_string_ext and free the memory in each loop iteration with lr_eval_string_ext_free.

I changed the code to:

c = atoi(lr_eval_string("{suffix_count}"));
for (i = 1; i <= c; i++) {
    strcat(answerString, "Answer");
    sprintf(sfx, "{suffix_%d}", i);
    lr_eval_string_ext(sfx, strlen(sfx), &sfx1, &prmLen, 0, 0, -1);
    strcat(answerString, sfx1);
    strcat(answerString, "=2&");
}
lr_save_string(answerString, "aString");
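One thing worth noting: the snippet above still never calls lr_eval_string_ext_free, so the memory-saving advice from the help note isn't actually realized. A sketch of the documented pattern, releasing each evaluation's buffer inside the loop (sfx1 is a char* and prmLen an unsigned long):

```c
for (i = 1; i <= c; i++) {
    sprintf(sfx, "{suffix_%d}", i);
    lr_eval_string_ext(sfx, strlen(sfx), &sfx1, &prmLen, 0, 0, -1);
    strcat(answerString, "Answer");
    strcat(answerString, sfx1);
    strcat(answerString, "=2&");
    lr_eval_string_ext_free(&sfx1);  /* free this evaluation's buffer */
}
```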

but to my disappointment, it still threw an error when executing in the Controller after the 4th iteration. The good part was that the error message was familiar; the bad part was that it was a memory violation exception:

Action.c(168): Error: C interpreter run time error: Action.c (168): Error -- memory violation : Exception ACCESS_VIOLATION

For some reason, I felt that all the years of avoiding or trying to avoid using pointers obligated me to get to the root of the issue this time and fix it instead of coming up with a workaround. The actual issue however turned out to be something else and I’ll come back to it later. First order of the day…use char pointers instead of static arrays to manage the strings.

To start:

char *answerString; //instead of char answerString[256]
char sfx[10]; //this still can be an array

The next step was to figure out how much memory I would need to allocate based on the number of questions returned, and then allocate it. This was done using malloc:

//we need "AnswerX=2&" (10 chars) times the number of questions, + 1 for '\0'
if ((answerString = (char *)malloc(c * 10 * sizeof(char) + 1)) == NULL) {
    lr_output_message("Insufficient Memory!!");
    return -1;
}
Also, we have to initialize it, because strcat appends at the first '\0' it finds and the freshly allocated space may contain garbage:

*answerString = '\0';

Now we have something very similar to a brand-new char array, but of exactly the size we need. Next, we create the string exactly as above; I used lr_eval_string instead of lr_eval_string_ext because I honestly didn't think that was the issue. After creating the string, I null-terminated it:

for (i = 1; i <= c; i++) {
    strcat(answerString, "Answer");
    sprintf(sfx, "{suffix_%d}", i);
    strcat(answerString, lr_eval_string(sfx));
    strcat(answerString, "=2&");
}
answerString[c * 10 * sizeof(char)] = '\0';
lr_save_string(answerString, "aString");
free(answerString);  //done with the buffer once it's saved in the parameter

And the best part: after saving the string in a parameter, I free the associated memory and relish my guilt-free existence (at least in terms of this script). The script worked like a charm, not only through VuGen but through multiple iterations of the scenario in the Controller.

So what was the issue with using the character array? It was not that the LR agents were running out of memory because I had used 256 bytes when I actually needed less than that. The issue was that I was not emptying the array before using it. I had declared it within the Action itself:

  int i,c;
  char answerString[256], sfx[10];

and I wrongly assumed that LoadRunner throws away variables from the previous iteration and initializes brand-new variables in every new iteration. Instead, what happened when I used strcat was that it kept concatenating the new answer string onto whatever was left from the previous iteration. So after a few iterations, it ran out of the pre-allocated 256 bytes of space and threw the memory violation exception. I could have continued to use the char array (though I'm glad I didn't) by just re-initializing it in every iteration.

So, lesson learnt. Hopefully all this helps somebody avoid the mistakes I made. I certainly won't repeat them, and I'll also be less hesitant about using pointers. Even though I'm pretty sure this is not the last I've seen of memory violation exceptions, I can say that I'll be ready to learn something new the next time one happens.

By the way… that peer-to-peer file sharing application: I was finally able to compile it and make it work using a mix of pointers and character arrays. It worked great, and I felt satisfied when I completed it. But of course, when I was demonstrating it to the TA, it didn't function as expected, and I later found out that it was because I had forgotten to null-terminate a string.

Thursday, January 14, 2010

Parrot AR.Drone – iPhone controlled flying experience


A colleague of mine forwarded me this and I was immediately impressed: the AR.Drone is an iPhone/iPod Touch-controlled (via Wi-Fi) helicopter that you can not only fly around but also use to play augmented reality games.

Check out the video below and others on YouTube.

Wednesday, January 6, 2010

2 new blogs to follow

There are 2 new technology/technical blogs that I’m following.

- The Daily WTF contains some really funny real-life situations, some of them too ridiculous not to make you go wtf?

- Digital Inspiration has informative posts about new and interesting technologies, applications, etc.