libcurl experience and question
Well, I've written my first "big" (relatively speaking) program with libcurl - it sends a request to a classified ad site (CL) looking for old Mustangs for sale. It works pretty well. I wrote it in C. It's about 13 screens long in my IDE (Xcode).
My first test was to hardcode five different cities, loop through them, and pass a URL to search for "mustang". This takes on average about 15-20 seconds depending on network delays. Each time it gets a whole page, it pulls out the ad data, then parses each ad, validating its likelihood of actually being a Mustang ad, then gets the positional price info, if it exists, and then determines the year. This is kinda tricky because the year might be 1967, '67, 67, and so on. So I look for the best fit first, then get looser in my tests. No regexes, just blunt force parsing in C - the old "divide and conquer" approach.
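The "best fit first, then looser" year matching might look something like the sketch below (the helper names, the year cutoffs, and the century guess are all mine for illustration, not the actual program):

```c
#include <ctype.h>
#include <stdlib.h>

/* Guess the century for an abbreviated year: 00-29 -> 2000s, 30-99 -> 1900s. */
static int expand_two_digits(int nn)
{
    return nn < 30 ? 2000 + nn : 1900 + nn;
}

/* Three passes over the ad text, strictest first. Returns 0 if no year found. */
int parse_year(const char *ad)
{
    const char *p;

    /* Pass 1: best fit - a standalone 4-digit year like "1967". */
    for (p = ad; *p; p++) {
        if (isdigit((unsigned char)p[0]) && isdigit((unsigned char)p[1]) &&
            isdigit((unsigned char)p[2]) && isdigit((unsigned char)p[3]) &&
            !isdigit((unsigned char)p[4]) &&
            (p == ad || !isdigit((unsigned char)p[-1]))) {
            int y = atoi(p);
            if (y >= 1930 && y <= 2007)   /* reject prices like "4500" */
                return y;
        }
    }

    /* Pass 2: an abbreviated year like "'67". */
    for (p = ad; *p; p++) {
        if (*p == '\'' && isdigit((unsigned char)p[1]) &&
            isdigit((unsigned char)p[2]) && !isdigit((unsigned char)p[3]))
            return expand_two_digits(atoi(p + 1));
    }

    /* Pass 3 (loosest): a bare two-digit token like "67". */
    for (p = ad; *p; p++) {
        if (isdigit((unsigned char)p[0]) && isdigit((unsigned char)p[1]) &&
            !isdigit((unsigned char)p[2]) &&
            (p == ad || !isalnum((unsigned char)p[-1])))
            return expand_two_digits(atoi(p));
    }
    return 0;  /* no year found */
}
```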
The output is then sorted from lowest to highest price, and I create an XML file, using XLink for the hyperlinks that open the detailed ad. I wasted about 5 hours trying to get the hyperlinks to work in Safari and IE 6, then tried Firefox, and they worked like a charm.
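For anyone curious, the output looks roughly like this (the element names, attributes, and URL are invented for illustration; only the xlink namespace and xlink:href/xlink:type attributes are standard XLink):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<results xmlns:xlink="http://www.w3.org/1999/xlink">
  <!-- one entry per ad, sorted by price -->
  <ad price="4500" year="1967">
    <link xlink:type="simple"
          xlink:href="http://example.org/ads/1234.html">1967 Mustang fastback</link>
  </ad>
</results>
```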
I used the curl_easy_... functions, but I'm going to write another version that uses the curl_multi interface, and with that version, also search every city in the country. Kinda neat! libcurl sure makes it easy.
I do have a question that I'm sure I could dig up by researching, but figured I would ask here. (I'm obviously too lazy to manually search for a Mustang...)
One of the sites always failed to respond, so my program just sat there, waiting synchronously, and after a few minutes I killed it. Is there a timeout value I can specify so curl_easy_perform doesn't just sit there and wait?