Overview: A simple Python-based CLI tool for Google dorking!
Features:
- Automated proxy updates, thanks to TheSpeedX
- Proxy caching, with a refresh interval of one day
- Default cookies set out of the box, to avoid barriers while searching
- Built-in list of 70+ "Google dorking" queries
- Built-in list of 890+ user agents
- BETA version, use at your own risk: still upgrading...
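The daily proxy-cache behaviour listed above can be sketched like this; the file name, URL constant, and function name are illustrative assumptions, not Gdorker's actual code:

```python
import json
import os
from datetime import datetime, timedelta

import requests

# Hypothetical names for illustration; Gdorker's actual cache file and
# proxy-list URL may differ.
CACHE_FILE = "proxies_cache.json"
PROXY_URL = "https://raw.githubusercontent.com/TheSpeedX/PROXY-List/master/http.txt"

def get_proxies():
    """Return a proxy list, re-downloading it at most once per day."""
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            cache = json.load(f)
        fetched_at = datetime.fromisoformat(cache["fetched_at"])
        if datetime.now() - fetched_at < timedelta(days=1):
            return cache["proxies"]  # cache still fresh: skip the download
    # Cache is missing or older than one day: fetch a fresh list.
    response = requests.get(PROXY_URL, timeout=10)
    response.raise_for_status()
    proxies = response.text.split()
    with open(CACHE_FILE, "w") as f:
        json.dump({"fetched_at": datetime.now().isoformat(),
                   "proxies": proxies}, f)
    return proxies
```

Storing the fetch timestamp alongside the list lets the tool decide freshness locally, without an extra network round-trip.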
- Python: The core programming language used to write the script.
- Requests: A popular Python library for making HTTP requests. It is used to send requests to the Google search engine and retrieve search results.
- Beautiful Soup: A Python library for parsing HTML and XML documents. It is used to parse the HTML content of the search results retrieved from Google.
- contextlib: A standard-library module providing utilities for working with context managers. Imported but not used in the provided code snippet.
- argparse: A standard-library module for parsing command-line arguments passed to the script. Imported but not used in the provided code snippet.
- sys: A standard-library module providing access to variables and functions maintained by the Python interpreter. Imported but not used in the provided code snippet.
- time: A standard-library module providing time-related functions. Imported but not used in the provided code snippet.
- datetime: A standard-library module providing classes for manipulating dates and times. It is used to handle timestamps in the script.
- base64: A standard-library module for encoding and decoding data in Base64 format. Imported but not used in the provided code snippet.
- json: A standard-library module for encoding and decoding JSON data. Imported but not used in the provided code snippet.
- re: A standard-library module providing support for regular expressions. Imported but not used in the provided code snippet.
- os: A standard-library module providing a portable way of using operating-system-dependent functionality. Imported but not used in the provided code snippet.
Usage:
If you haven't installed Python and python-pip yet, install them with the commands below.

```shell
sudo apt update && sudo apt upgrade -y; sudo apt install python3 python3-pip
```

OR, for Termux users:

```shell
pkg update && pkg upgrade -y; pkg install git python3 -y
```

For both PC and Termux users [LINUX ONLY]:

```shell
cd ~/; git clone --depth=1 https://github.com/mr0erek/Gdorker; cd Gdorker; pip install -r requirements.txt
```

To start, use the command below:

```shell
python Gdorker.py
```

OR

```shell
python3 Gdorker.py
```

Examples:

```shell
python Gdorker.py --site xyzwebsite.xyz
```
Future Improvements:
I'm working on it....
Contributing:
As it is open source, feel free to contribute!
Description:
This module performs a Google search using the provided search query and retrieves the search results. It then parses the HTML content of the search results to extract titles, links, and snippets for each search result entry.
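The fetch-and-parse flow described above can be sketched as follows. The helper name `parse_results`, the result-container class, and the fallback strings are illustrative assumptions (Google's markup changes frequently), not the script's actual implementation:

```python
import requests
from bs4 import BeautifulSoup

def parse_results(html):
    """Extract title/link/snippet dicts from a results page.

    The tag and class choices below are illustrative; Google's markup
    changes often, so the real script may select elements differently.
    """
    soup = BeautifulSoup(html, "html.parser")
    results = []
    for entry in soup.select("div.g"):  # assumed result-container class
        title = entry.find("h3")
        link = entry.find("a")
        snippet = entry.find("span")
        results.append({
            "title": title.get_text() if title else "No title",
            "link": link["href"] if link else "No link",
            "snippet": snippet.get_text() if snippet else "No snippet",
        })
    return results

def fetch_url(url):
    """Fetch the given search URL; return parsed results, or [] on failure."""
    headers = {"User-Agent": "Mozilla/5.0"}  # Gdorker rotates many real UAs
    try:
        resp = requests.get(url, headers=headers, timeout=10)
        resp.raise_for_status()
    except requests.RequestException as exc:
        print(f"Request failed: {exc}")
        return []
    return parse_results(resp.text)
```

A call might look like `fetch_url("https://www.google.com/search?q=site:example.com")`; splitting the parsing into its own function keeps it testable without network access.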
Dependencies:
- requests: For making HTTP requests to the Google search engine.
- BeautifulSoup: For parsing HTML content.
Methods/Functions:
- fetch_url(url): Retrieves Google search results for the given search query.
  - Parameters:
    - --site : <your_website>
  - Returns:
    - A list of dictionaries, each containing the title, link, and snippet of a search result entry.
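For illustration, a return value with this shape could be consumed like so (the entry below is a hypothetical placeholder, not real output):

```python
# Hypothetical sample matching the return shape described above.
results = [
    {"title": "Example Domain", "link": "https://example.com",
     "snippet": "This domain is for use in illustrative examples."},
]

# Print each entry as a numbered block.
for i, entry in enumerate(results, 1):
    print(f"[{i}] {entry['title']}\n    {entry['link']}\n    {entry['snippet']}")
```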
Usage Example:

```shell
python Gdorker.py --site xyz_website.xyz
```
Error Handling:
If the request to Google fails (e.g., due to network issues or rate limiting), an error message will be printed, and an empty list will be returned. If any required elements (title, link, snippet) are not found in a search result entry, default values will be used instead.
Performance Considerations:
The module uses efficient HTTP requests and HTML parsing techniques to minimize latency and resource usage.
Testing:
Testing was conducted using unit tests to ensure the functionality of the fetch_url function. Sample search queries were used to verify that the module returns accurate search results.
References:
- Python Requests library documentation: https://docs.python-requests.org/en/latest/
- BeautifulSoup documentation: https://www.crummy.com/software/BeautifulSoup/bs4/doc/