Several times over the past two months I've headed over to Google.com to do my googling. All goes well until suddenly Google decides that I am a bot and throws a captcha screen in front of me instead of my search results. From then on it's no Google for me, no matter how many captchas I enter and no matter which browser I use. I've pretty much just had to search via a different search engine and wait it out until Master Google decides that I am worthy of its service again.
Yesterday it happened again and it occurred to me that I was running Fiddler. After turning Fiddler off and waiting a bit I was able to use Google again.
Has anyone else experienced this?
I hope this is the right place; I haven't found a way to create a support ticket for Fiddler...
I am having a problem with Fiddler following a 307 redirect to a URL that needs authentication. Specifically, I let Fiddler issue an OPTIONS request to https://myserver/foo. The server requires authentication, so a challenge and response are performed and Fiddler repeats the initial request with an Authorization header. So far so good. Now the server actually processes Fiddler's request for the first time and responds with a 307 status code, redirecting to https://myserver/foo/ (note the trailing slash). Fiddler then repeats the last request against the new URL, but without stripping the Authorization header.
The problems with this are twofold: the replayed NTLM Authorization header breaks the redirected request, and blindly forwarding credentials to a redirect target is a security concern.
Let me show you an example. The following four requests make up the full exchange:
Initial anonymous request:
OPTIONS https://myserver/foo HTTP/1.1
User-Agent: Fiddler
Content-Length: 0
Response:
HTTP/1.1 401 Unauthorized
Content-Length: 0
WWW-Authenticate: NTLM
Initiation of the handshake:
OPTIONS https://myserver/foo HTTP/1.1
User-Agent: Fiddler
Content-Length: 0
Authorization: NTLM (...)
Response:
HTTP/1.1 401 Unauthorized
Content-Length: 0
WWW-Authenticate: NTLM (...)
Answer to the server's challenge:
OPTIONS https://myserver/foo HTTP/1.1
User-Agent: Fiddler
Content-Length: 0
Authorization: NTLM (...)
Response:
HTTP/1.1 307 Temporary Redirect
Content-Length: 0
Location: https://myserver/foo/
Request to the new URL:
OPTIONS https://myserver/foo/ HTTP/1.1
User-Agent: Fiddler
Content-Length: 0
Authorization: NTLM (...)
Response:
HTTP/1.1 400 Bad Request
Content-Length: 0
When I repeat the last request without the Authorization header, everything works fine, i.e. the challenge and response are repeated and ultimately the desired 204 response with the Allow header is returned.
In my opinion, Fiddler should never retain an Authorization header when following a redirect! Since to my knowledge there is no definite rule that flat out forbids that behaviour, calling it a bug might be exaggerated. But in order to play nicely with various web services, and with security in mind, I think this issue should be addressed.
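The behaviour being asked for can be sketched as a small redirect-handling rule. This is only an illustration of the expected behaviour (in Python, not Fiddler's actual code): when a redirect is followed, the outgoing headers are rebuilt without the Authorization header, so the challenge/response handshake can restart cleanly against the new URL.

```python
# Illustrative sketch, not Fiddler's implementation: drop credentials
# before replaying a request at a redirect target.
def headers_for_redirect(headers):
    """Return a copy of the request headers with the Authorization header stripped."""
    return {name: value for name, value in headers.items()
            if name.lower() != "authorization"}

original = {
    "User-Agent": "Fiddler",
    "Content-Length": "0",
    "Authorization": "NTLM (...)",
}
# Headers to send to https://myserver/foo/ after the 307:
redirected = headers_for_redirect(original)
```

With this rule, the request to the new URL starts anonymously and the server can issue a fresh 401 challenge, exactly as in the working manual repeat above.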
Thank you.
Are you able to capture traffic from your phone without having to be on the same Wi-Fi?
I have an Ethernet cable with extremely fast internet, but my Wi-Fi is very slow compared to it.
If I grab the little target icon and drag it onto an Edge window/tab in the UI, it always shows the same process ID (the main MicrosoftEdge.exe process) and nothing is ever captured. Instead I have to go to the filtering tab manually, use the "Show only traffic from" drop-down, figure out which MicrosoftEdgeCP.exe instance is the tab I'm interested in, and filter on it there.
I'm not sure what you'll be able to do about this, short of detecting that it's MicrosoftEdge.exe and doing something more specific, but I thought I'd raise the issue to make sure you're aware of it.
Not sure if these forums are active, but I tried the "Crawl Sequential URLs" script; however, it dumps all the sequential requests at once, as fast as the processor can handle them, causing hundreds of requests every few seconds. How do I tell it to wait 1 second between requests?
I've been trying to get this working with Thread.Sleep and Timers for the last 12 hours and I'm almost ready to throw my computer out the window.
Here is the example script:
public static ToolsAction("Crawl Sequential URLs")
function doCrawl(){
    var sBase: String;
    var sInt: String;
    sBase = FiddlerObject.prompt("Enter base URL with ## in place of the start integer", "http://www.example.com/img##.jpg");
    sInt = FiddlerObject.prompt("Start At", "1");
    var iFirst = int.Parse(sInt);
    sInt = FiddlerObject.prompt("End At", "12");
    var iLast = int.Parse(sInt);
    for (var x = iFirst; x <= iLast; x++)
    {
        // Replace 's' with your HTTP Request. Note: \ is a special character in JScript.
        // To represent a backslash in a string constant, double it, like \\
        var s = "GET " + sBase.Replace("##", x.ToString()) + " HTTP/1.0\r\n\r\n";
        var b = false;
        while (!b){
            try {
                FiddlerObject.utilIssueRequest(s);
                b = true;
            }
            catch(e){
                var iT = Environment.TickCount + 10000;
                FiddlerObject.StatusText = "Waiting 10 sec because we have too many requests outstanding...";
                while (iT > Environment.TickCount){ Application.DoEvents(); }
            }
        }
    }
}
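The pacing being asked about - issue one request, then pause a fixed interval before the next - is easiest to see as a plain loop. Here is a minimal Python sketch of that pattern (the fetch callback and delay value are stand-ins, not part of the script above; in FiddlerScript the analogous change would be a one-second wait inside the for loop, after each utilIssueRequest call):

```python
import time

def crawl_sequential(base, first, last, fetch, delay_s=1.0):
    """Fetch `base` with ## replaced by each integer, pausing between requests."""
    urls = []
    for i in range(first, last + 1):
        url = base.replace("##", str(i))
        fetch(url)               # issue one request...
        urls.append(url)
        if i < last:
            time.sleep(delay_s)  # ...then wait before issuing the next one
    return urls

# Example with a no-op fetch and no delay:
issued = crawl_sequential("http://www.example.com/img##.jpg", 1, 3,
                          lambda url: None, delay_s=0)
```

The key point is that the delay goes between iterations of the outer loop, not inside the retry/catch block, which only fires when too many requests are outstanding.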
Hello,
Using C# I want to programmatically export a Fiddler session as a webtest. I understand this is not possible using FiddlerCore because it does not support extensions (which are the mechanism for exporting the session as a web test through the UI).
Hence I have removed my dependency on FiddlerCore and taken one on Fiddler.exe, to be able to use the Fiddler.WebTest namespace (from VSWebTestExport.dll).
However, my existing code to capture the session (based on FiddlerCore) now no longer works, as FiddlerApplication.StartUp() does not exist on the Fiddler assembly.
How can I programmatically start and capture a session with a dependency on Fiddler.exe? I hope this will allow me to use the WebTest export class.
Thanks,
Mau
Hello,
I am using the latest version of Fiddler, 4.6.0.2, on Windows 7 64-bit.
If I load a saved archive and run a filter action on it - it works fine.
But any subsequent filter action - even disabling the filter and re-running/applying the change - doesn't work, and the list is "stuck" with the results of the first filter.
I have to reload the archive and run each filter one at a time, one per load.
Is this a known issue?
Thanks,
Eitan