Is it possible to distinguish between unwanted GET requests that contain only a path (no website URL) and normal/legitimate GET requests that contain the full URL?
The reason for this inquiry is that I see a behavior pattern in my HAProxy logs that starts with GET requests without a URL name, followed by a flood of GET requests probing (guessing) all kinds of path names related to apps like WP / Wiki / phpMyAdmin, etc.
Many times this behavior is repeated in a coordinated manner from numerous different IP addresses, almost as if the first IP informs the others to join the scan (probing).
My goal is as follows:
REJECT “GET /pw/Main_Page HTTP/1.1”
ACCEPT “GET https://mysite.com/pw/Main_Page HTTP/1.1”
Is this feasible with HAProxy ACLs?
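Something along these lines is what I had in mind, but it is completely untested, and the frontend/backend names and certificate path below are only placeholders for my real setup:

```
frontend fe_https
    # bind address and certificate path are placeholders
    bind :443 ssl crt /etc/haproxy/certs/mysite.pem

    # 'url' is the request-target exactly as it appears on the request line,
    # so an absolute-form request begins with the scheme instead of "/"
    acl req_has_full_url url_beg -i http:// https://

    # reject bare-path (origin-form) requests, accept full-URL ones
    http-request deny unless req_has_full_url

    default_backend be_app
```

If that is roughly the right direction, I can refine it from there.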
P.S.
I am not worried about googlebot or other crawlers not being able to index my site.
Thank you.