Context:
I'm using this to scrape a page for events, and it mostly works. It fetches the content and adds everything to my site's page using the following code (which I included after the library's script reference in the markup):
Code:
            var baseUrl = "http://www.whatever.com";
            var payload = function(){
               $.ajax({
                  url: baseUrl,
                  type: "get",
                  dataType: "html",
                  success: function(data) {
                     // With dataType "html", `data` is already the response
                     // text itself (there is no `responseText` property here).
                     var $foop = $('<form>' + data + '</form>');

                     // For each li from the payload...
                     $.each($foop.find('.view-upcoming-events-homepage li'), function(idx, item) {
                        // For each link found, prefix the original href values
                        // to avoid relative locations once the links are harvested.
                        $.each($(item).find('a'), function(){
                           $(this).attr("href", 'http://www.whatever.com' + $(this).attr("href"));
                        });
                        // Create our site's event divs and add the event content to them.
                        var event = $(item).html();
                        $('<div class="event">' + event + '</div>').appendTo($('#block-block-5 .content'));
                     });
                  },
                  error: function(jqXHR, textStatus, errorThrown) {
                     console.log(textStatus);
                  }
               });
            };
            payload();

            // Supposed to apply only to the divs generated from the resulting
            // payload above. So if this runs BEFORE downloading of the content
            // is complete, how does it get handled?
            setTimeout(function(){
                var seen = {};
                $('#block-block-5 .mini-cal').each(function(idx, item){
                    var txt = $(this).text();
                    if (seen[txt]) {
                        // Events are designated by their day numbers. If there are
                        // duplicate entries for, say, the 24th, why not hide the
                        // unnecessary 24s and keep just one for the heading?
                        // Makes it look nicer and cleaner.
                        $(this).stop().removeAttr('style').animate({opacity: 0}, 2000);
                    } else {
                        seen[txt] = true;
                    }
                });
            }, 2500);
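For what it's worth, the `seen` object in that setTimeout block implements a keep-first-occurrence filter. Extracted from jQuery into plain JavaScript (the `flagDuplicates` name is made up for illustration), the logic is just:

```javascript
// Flag every repeat of a value after its first occurrence.
// Mirrors the `seen` object used in the setTimeout block above:
// first occurrence stays visible, later duplicates get faded out.
function flagDuplicates(values) {
   var seen = {};
   return values.map(function (txt) {
      if (seen[txt]) return true;   // duplicate: would be faded out
      seen[txt] = true;
      return false;                 // first occurrence: kept
   });
}

// e.g. day numbers scraped from the mini-cal headings
flagDuplicates(['24', '24', '25']); // → [false, true, false]
```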
Sometimes this works, and sometimes it doesn't. I suspect the problem is either how long the content takes to be scraped and downloaded, or else the setTimeout I added, which was itself an attempt to work around the former.

Long story short, I'm not sure everything in that setTimeout call is correct, nor am I certain that I'm appending the elements in the first section correctly. Insight is appreciated.
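To illustrate what I mean about the timing: since the dedupe pass only makes sense after the fetched content has been appended, I gather the usual pattern is to run it from a completion callback rather than guessing a delay. A minimal sketch of that sequencing in plain JavaScript (the `fetchEvents`/`dedupeEvents` names are placeholders, not my real functions):

```javascript
// Placeholder for the $.ajax call: the callback fires only after the
// scraped events have (hypothetically) been appended to the page.
function fetchEvents(onDone) {
   setTimeout(function () {
      onDone(['24', '24', '25']);
   }, 0);
}

// Keep only the first occurrence of each day number.
function dedupeEvents(days) {
   var seen = {};
   return days.filter(function (d) {
      if (seen[d]) return false;
      seen[d] = true;
      return true;
   });
}

// Dedupe runs when the fetch reports completion,
// not after a fixed 2500 ms timer.
fetchEvents(function (days) {
   console.log(dedupeEvents(days)); // [ '24', '25' ]
});
```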