Metadata-Version: 2.1
Name: RequestSoup
Version: 1.0.2
Summary: A wrapper created to make using requests and BeautifulSoup in conjunction easier
Home-page: https://github.com/ArnavChawla/RequestSoup
Author: Arnav Chawla
Author-email: arnavchawla23@gmail.com
License: UNKNOWN
Description-Content-Type: text/markdown
Platform: UNKNOWN
Classifier: License :: OSI Approved :: MIT License
Requires-Dist: requests
Requires-Dist: bs4

# Welcome to RequestSoup!

RequestSoup was created with the goal of making Python web-scraping easier. The package interfaces both requests and BeautifulSoup, making them easier to use in conjunction with one another.

# Installation
The installation process for RequestSoup is pretty easy! Just enter the following command in your terminal:

`pip install RequestSoup`

# Usage
Usage of the package is almost identical to the individual use of requests and BeautifulSoup. Here is an example:

    import RequestSoup as scraper

    r = scraper.get("https://google.com")
    # r is a requests Response object

    elements = scraper.findAll("a", {"href": "/"})
    # elements is a list of all "a" elements on the page whose href points to "/"

For more detailed usage, check out: https://requestsoup.readthedocs.io/en/latest/
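To illustrate the attribute filter that the `findAll` call above relies on, here is a small stdlib-only sketch (hypothetical, not the package's actual code) that finds every `a` tag whose `href` is exactly `/` in an HTML string:

```python
from html.parser import HTMLParser

class LinkFilter(HTMLParser):
    """Collect tags matching a name and exact attribute values,
    mimicking a findAll("a", {"href": "/"}) style filter."""
    def __init__(self, name, attrs):
        super().__init__()
        self.name = name
        self.wanted = attrs
        self.matches = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Keep the tag only if its name and every wanted attribute match
        if tag == self.name and all(attrs.get(k) == v for k, v in self.wanted.items()):
            self.matches.append(attrs)

html = '<a href="/">home</a><a href="/about">about</a>'
parser = LinkFilter("a", {"href": "/"})
parser.feed(html)
# parser.matches now holds only the tag pointing to "/"
```

BeautifulSoup does this matching (and much more) for you; the sketch just shows the rule the example depends on.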

# Sessions
The package also features the ability to create request sessions, which persist state (such as cookies) across multiple requests. An example is provided here:

    import RequestSoup as scraper

    session = scraper.Session()
    session.get("https://google.com")
    # A session object makes it much easier to send multiple requests to the same website

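This mirrors `requests.Session`, which carries cookies (and reuses connections) across calls. A minimal stdlib sketch of that idea (hypothetical, not the package's implementation):

```python
import http.cookiejar
import urllib.request

class SketchSession:
    """Hypothetical session: one shared cookie jar and opener,
    so state persists across every get() on the same object."""
    def __init__(self):
        self.cookies = http.cookiejar.CookieJar()
        self.opener = urllib.request.build_opener(
            urllib.request.HTTPCookieProcessor(self.cookies))

    def get(self, url):
        # Each call reuses the same opener, and thus the same cookie jar
        return self.opener.open(url)

session = SketchSession()
# session.get("https://google.com")  # network call, shown for shape only
```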
# Feedback
This is my first public Python package, and I would appreciate any user feedback. For feature requests or changes, feel free to create an issue or pull request on the GitHub repo.


