Automation Errors: 800706B5, 80004005, 80010108 appear for internal SAP site scrape

derigible · Aug 10, 2012 · Viewed 13.5k times

I am writing a macro that will scrape my company's internal SAP site for vendor information. For several reasons I have to use VBA to do so. However, I cannot figure out why I keep getting these three errors when I attempt to scrape the page. Is it possible that this has something to do with the UAC integrity model? Or is there something wrong with my code? Could a page served over HTTP be handled differently by Internet Explorer? I am able to go to any webpage, even other internal ones, and can scrape each of those just fine. But when I attempt to scrape the SAP page, I get these errors. The error descriptions and when they occur are:

800706B5 - The interface is unknown (occurs when I place breakpoints before running the offending code)

80004005 - Unspecified error (occurs when I don't place any breakpoints and just let the macro run)

80010108 - The object invoked has disconnected from its clients (I can't get this error to occur consistently; it seems to happen around the time that something in Excel becomes so corrupted that no page will load and I have to reinstall Excel)

I have absolutely no idea what is going on. The UAC integrity model page didn't make much sense to me, and all the research I found on this talked about connecting to databases and using ADO and COM references. However, I am doing everything through Internet Explorer. Here is my relevant code below:

Private Sub runTest_Click()
   ie.visible = True
   doScrape
End Sub
'The code to run the module
Private Sub doTest()
   Dim result As String
   result = PageScraper.scrapeSAPPage("<some num>")
End Sub

PageScraper Module

Public Function scrapeSAPPage(num As Long) As String
   'Predefined URL that appends num onto the end to navigate to a specific record in SAP
   Dim url As String: url = "<url here>"
   Dim ie As InternetExplorer
   Set ie = CreateObject("internetexplorer.application")
   Dim doc As HTMLDocument

   ie.navigate url 'Will always successfully open the page, whether SAP or other
   'Pauses the execution of the code until the webpage has loaded
   Do
     'Will always fail on the next line when attempting the SAP site, with the error
     If Not ie.Busy And ie.ReadyState = 4 Then
        Application.Wait (Now + TimeValue("00:00:01"))
        If Not ie.Busy And ie.ReadyState = 4 Then
           Exit Do
        End If
     End If
     DoEvents
   Loop

   Set doc = ie.document 'After implementing Tim Williams' changes, breaks here
   'Scraping code here, not relevant

End Function

I am using IE9 and Excel 2010 on a Windows 7 machine. Any help or insight you can provide would be greatly appreciated. Thank you.

Answer

Raymond · Dec 17, 2012

I do this type of scraping frequently and have found it very difficult to make IE automation work 100% reliably; errors like the ones you have found are common. Because they are often timing issues, they can be very frustrating to debug: they don't appear when you step through, only during live runs. To minimize the errors I do the following:

Introduce more delays. ie.Busy and ie.ReadyState don't necessarily give valid answers immediately after an ie.navigate, so add a short delay after ie.navigate. For the pages I'm loading, 1 to 2 seconds is typical, but anything over 500 ms seems to work.

Make sure IE is in a clean state by navigating to "about:blank" before navigating to the target URL. A sketch of this navigation sequence is below.
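
As a minimal sketch of the two points above (this is not the poster's original code; ie is late-bound here and targetUrl is a placeholder for the real SAP URL):

Dim ie As Object
Set ie = CreateObject("InternetExplorer.Application")

ie.navigate "about:blank"                        ' start from a clean state
Do While ie.Busy Or ie.ReadyState <> 4
    DoEvents
Loop

ie.navigate targetUrl                            ' targetUrl is a placeholder for your URL
Application.Wait Now + TimeValue("00:00:01")     ' short fixed delay before trusting Busy/ReadyState
Do While ie.Busy Or ie.ReadyState <> 4
    DoEvents
Loop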

After that you should have a valid IE object, and you'll have to inspect it to see what you've got inside. Generally I avoid trying to work with the entire ie.document and instead use ie.document.all.tags("x"), where 'x' is a suitable tag I'm looking for, such as td or a.
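
For instance, a hedged sketch of reading table cells that way, assuming the ie object from the loop above and that the data sits in td elements:

' Illustrative only: list the text of every <td> on the page via document.all.tags
Dim cells As Object, cell As Object
Set cells = ie.document.all.tags("td")
For Each cell In cells
    Debug.Print cell.innerText
Next cell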

However, although all these improvements have increased my success rate, I still get errors at random.

My real solution has been to abandon IE and instead do my work using xmlhttp.

If you are parsing out your data using text operations on the document, then it will be a no-brainer to swap over. The XMLHTTP object is MUCH more reliable, and you just read its responseText to access the entire HTML of the document.

Here is a simplified version of what I'm using in production now for scraping; it's so reliable that it runs overnight generating millions of rows without error.

Public Sub Main()

Dim obj As MSXML2.ServerXMLHTTP
Dim strData As String
Dim errCount As Integer

' create an xmlhttp object - you will need a reference to the Microsoft XML library, any version will do
' but I'm using Microsoft XML, v6.0 (c:\windows\system32\msxml6.dll)
Set obj = New MSXML2.ServerXMLHTTP

' Get the url - I set the last param to Async=True so that it returns right away and then lets me wait in
' code rather than trust it, but on an internal network False might be better for you.
obj.Open "GET", "http://www.google.com", True
obj.send ' this line actually does the HTTP GET

' Wait for completion, for up to 10 seconds
errCount = 0
While obj.readyState < 4 And errCount < 10
    DoEvents
    obj.waitForResponse 1 ' this is an up-to-one-second delay
    errCount = errCount + 1
Wend

If obj.readyState = 4 Then ' I do these on two
    If obj.Status = 200 Then ' different lines to avoid certain error cases
        strData = obj.responseText
    End If
End If

obj.abort  ' in real code I use some on error resume next, so at this point it is possible I have a failed
           ' get and so best to abort it before I try again

Debug.Print strData

End Sub
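
If you would rather keep DOM-style parsing instead of text operations, one common follow-on (not part of the code above, just a sketch) is to load responseText into an HTML document created with CreateObject("htmlfile") and query it as before:

' Optional: feed the fetched HTML back into a DOM so existing HTMLDocument-style
' parsing code can be reused; assumes strData was filled by the routine above.
Dim doc As Object
Set doc = CreateObject("htmlfile")
doc.body.innerHTML = strData
Debug.Print doc.getElementsByTagName("td").Length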

Hope that helps.