feat: project structure changed #10

Open · wants to merge 34 commits into base: main

Changes from all commits · 34 commits
- 6adbbe1 feat: remove old files (yokwejuste, Aug 12, 2022)
- 8d83493 feat: remove old files (yokwejuste, Aug 12, 2022)
- 716c07f feat: remove old files (yokwejuste, Aug 12, 2022)
- 19cfb04 feat: remove old files (yokwejuste, Aug 12, 2022)
- e448c38 feat: remove old files (yokwejuste, Aug 12, 2022)
- 80b0b39 feat: remove old files (yokwejuste, Aug 12, 2022)
- 623af39 del: remove old files (yokwejuste, Aug 12, 2022)
- 153f0ac feat: initialized frontend (yokwejuste, Aug 12, 2022)
- 9fb5c37 feat: added requirements files (yokwejuste, Aug 12, 2022)
- 536cd03 feat: added model to database (yokwejuste, Aug 12, 2022)
- a1ec497 feat: added a scraper to the project (yokwejuste, Aug 12, 2022)
- ffbda5a feat: flask backend server (yokwejuste, Aug 12, 2022)
- 331e9ea feat: updated scraper by code optimization (yokwejuste, Aug 12, 2022)
- 88200ca feat: model file name changed (yokwejuste, Aug 12, 2022)
- f595e21 feat: added database to ignored files (yokwejuste, Aug 13, 2022)
- c1fe72c feat: updated the main frontend structure (yokwejuste, Aug 13, 2022)
- 4e86a4e feat: more conditions and errors catch implemented (yokwejuste, Aug 13, 2022)
- 312fec0 feat: created db for Pharmacies (yokwejuste, Aug 13, 2022)
- c75f1e2 feat: conversion of csv to sqlite (yokwejuste, Aug 13, 2022)
- 98ed502 feat: added sleep time for some objects (yokwejuste, Aug 13, 2022)
- 878149d feat: update code conversion site (yokwejuste, Aug 13, 2022)
- a9d22ce feat: added csv files to be ignored (yokwejuste, Aug 13, 2022)
- 8775e88 feat: code optimization of the scraper (yokwejuste, Aug 13, 2022)
- f3aba94 feat: code optimization of the scraper (yokwejuste, Aug 13, 2022)
- 7f32e2d feat: added a license file (yokwejuste, Aug 13, 2022)
- 2df1331 feat: added project description (yokwejuste, Aug 13, 2022)
- 1bc29b9 feat: code optimization of the scraper (yokwejuste, Aug 13, 2022)
- 24c2ea8 feat: from send keys to clear (yokwejuste, Aug 13, 2022)
- fcea340 feat: from send keys to clear (yokwejuste, Aug 13, 2022)
- c3b1705 feat: scraper optimized (yokwejuste, Aug 13, 2022)
- 43bd469 feat: added replace to model (yokwejuste, Aug 13, 2022)
- 92e1828 feat: scraper optimized (yokwejuste, Aug 13, 2022)
- 7b1e8db feat: scraper optimized (yokwejuste, Aug 13, 2022)
- 3c5fb9e feat: scraper optimized (yokwejuste, Aug 14, 2022)
10 changes: 10 additions & 0 deletions .gitignore
@@ -102,3 +102,13 @@ dist

# TernJS port file
.tern-port


# files generated by IDEs and tooling
*history*
*idea*
*vscode*
*.sqlite3
*.db
*.sqlite
*.csv
13 changes: 0 additions & 13 deletions Dockerfile

This file was deleted.

63 changes: 61 additions & 2 deletions README.md
@@ -1,2 +1,61 @@
# pharmacy
A list of Cameroonian pharmacies
## OSSCAMEROON PHARMACIES
This project is an [osscameroon](https://osscameroon.com) initiative. Using appropriate technologies, we provide a directory of all the
pharmacies on the Cameroonian national territory.
## Features
* [Open Source](https://osscameroon.com/osscameroon/osscameroon.html)
* Promotes the health and well-being of Cameroonian citizens by locating the nearest pharmacies and health care services.

## Description
**OSSCAMEROON PHARMACIES** is a web application that locates the nearest pharmacies and health care services in Cameroon.
Using [scraping](https://en.wikipedia.org/wiki/Web_scraping) techniques, we collect the data from [Google Maps](https://www.google.com/maps).
- The backend is a [Python](https://www.python.org/) web application built with [Flask](https://flask.palletsprojects.com/).
- A [sqlite3](https://docs.python.org/3/library/sqlite3.html) database is used to store the data.
- A frontend built with [React](https://reactjs.org/) and [TypeScript](https://www.typescriptlang.org/) displays the data.


To contribute, follow the steps in this short guide.

## Contribution

- [Fork](https://github.com/osscameroon/pharmacies/fork) the project from [here](https://github.com/osscameroon/pharmacies/fork).
- Clone the repository as follows:

```bash
git clone https://github.com/<username>/pharmacies.git
```
- [Create a new branch](https://help.github.com/articles/creating-a-new-branch/) for each PR.

```bash
git checkout -b <branch-name>
```

- [Commit and push](https://help.github.com/articles/using-git-commands/) your changes.


- Running the backend server
```bash
cd backend
virtualenv -p python3 venv
source venv/bin/activate
pip install -r requirements.txt
python server.py
```
- Running the frontend
```bash
yarn install
yarn start
```
- Running the scraper
```bash
cd backend/scraper
python scraper.py
```
Results will be written to `./backend/scraper/pharmacies.csv`.
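The generated CSV can then be consumed by the backend or any other tool. A minimal sketch, assuming the column names the scraper writes (`Name`, `Location`, `Contact`, `Rating`, `Latitude`, `Longitude`, `Images`) and using a made-up in-memory sample instead of the real file:

```python
import csv
from io import StringIO

# Stand-in for pharmacies.csv, in the same shape the scraper writes (hypothetical data).
sample = StringIO(
    "Name,Location,Contact,Rating,Latitude,Longitude,Images\n"
    "Pharmacie Centrale,Yaounde,222-22-22-22,4.5,3.8480,11.5021,No Image Found\n"
)

# Read the CSV into dictionaries keyed by the header row.
rows = list(csv.DictReader(sample))
print(rows[0]["Name"])      # first pharmacy's name
print(rows[0]["Rating"])    # stored as a string; cast with float() if needed
```

To read the real output, replace the `StringIO` sample with `open('./backend/scraper/pharmacies.csv', newline='')`.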

## Contributors

- [Boris Mbarga](https://github.com/elhmn)
- [Steve Yonkeu](https://github.com/yokwejuste)

This project is open source under the [GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html).
44 changes: 44 additions & 0 deletions backend/model/model.py
@@ -0,0 +1,44 @@
import pandas as pd
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///pharmacies.sqlite3'
# Modification tracking adds overhead and is not needed here.
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False

db = SQLAlchemy(app)


class Pharmacies(db.Model):
    id = db.Column('pharmacy_id', db.Integer, primary_key=True)
    name = db.Column(db.String(100))
    location = db.Column(db.String(50))
    contact = db.Column(db.String(200))
    longitude = db.Column(db.String(10))
    rating = db.Column(db.DECIMAL)
    latitude = db.Column(db.String(10))

    def __init__(self, name, location, contact, rating, latitude, longitude):
        self.name = name
        self.location = location
        self.contact = contact
        self.rating = rating
        self.latitude = latitude
        self.longitude = longitude


# Recent Flask-SQLAlchemy versions require an application context for database work.
with app.app_context():
    db.create_all()

    db.session.add(
        Pharmacies(
            name='Steve', location='New York',
            contact='123-456-7890', rating=5,
            latitude='40.7128', longitude='74.0060'
        )
    )
    db.session.commit()

    # Load the scraped CSV and replace the pharmacies table with its contents.
    pharmacies = pd.read_csv('../scraper/pharmacies.csv')
    pharmacies.to_sql('pharmacies', con=db.engine, if_exists='replace', index=False)
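Once the CSV has been loaded into SQLite, the table can also be queried directly with the standard library. A minimal sketch using an in-memory database and made-up rows (the real file is `pharmacies.sqlite3`, with the columns the model above defines):

```python
import sqlite3

# In-memory stand-in for pharmacies.sqlite3, mirroring the model's columns.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE pharmacies (pharmacy_id INTEGER PRIMARY KEY, name TEXT, "
    "location TEXT, contact TEXT, rating REAL, latitude TEXT, longitude TEXT)"
)
conn.execute(
    "INSERT INTO pharmacies (name, location, contact, rating, latitude, longitude) "
    "VALUES ('Pharmacie du Centre', 'Yaounde', '123-456', 4.2, '3.8480', '11.5021')"
)

# Fetch the best-rated pharmacies first.
rows = conn.execute(
    "SELECT name, location, rating FROM pharmacies ORDER BY rating DESC"
).fetchall()
print(rows)
```

Against the real database, replace `":memory:"` with the path to `pharmacies.sqlite3` and skip the CREATE/INSERT setup.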
10 changes: 10 additions & 0 deletions backend/requirements.txt
@@ -0,0 +1,10 @@
flask==2.2.2
flask-cors
flask-sqlalchemy
pandas
parsel==1.6.0
pip-chill==1.0.1
pyopenssl==22.0.0
pysocks==1.7.1
selenium==4.4.0
webdriver-manager==3.8.3
ipython==8.4.0
223 changes: 223 additions & 0 deletions backend/scraper/scraper.py
@@ -0,0 +1,223 @@
import csv
from time import sleep

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from selenium.common.exceptions import NoSuchElementException, InvalidSelectorException, \
    ElementClickInterceptedException

driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
driver.maximize_window()
query_array = ['yaounde']
# query_array = ['yaounde', 'douala', 'garoua', 'buea', 'bamenda', 'maroua', 'bertoua', 'ngaroundere', 'baffousam']
for cities in query_array:
    driver.get('https://www.google.com/maps/')
    search_input = driver.find_element(By.NAME, 'q')

    search_input.send_keys(f'Pharmacies {cities}')
    sleep(3)
    search_input.send_keys(Keys.ENTER)

    sleep(14)

    query_results = driver.find_element(
        By.CSS_SELECTOR,
        'div.m6QErb.DxyBCb.kA9KIf.dS8AEf.ecceSd>div.m6QErb.DxyBCb.kA9KIf.dS8AEf.ecceSd'
    )

    vertical_ordinate = 100

    fields = ['Name', 'Location', 'Contact', 'Rating', 'Latitude', 'Longitude', 'Images']

    # Scroll the results pane until the end-of-list marker appears.
    while True:
        driver.execute_script(
            "arguments[0].scrollTop = arguments[1]", query_results, vertical_ordinate)
        vertical_ordinate += 100
        sleep(1)
        try:
            end_marker = driver.find_element(
                By.XPATH, '/html/body/div[3]/div[9]/div[9]/div/div/div[1]/div[2]/div/div[1]/div/div/div[2]/div[1]'
                          '/div[243]/div/p/span/span[1]'
            )
            print(end_marker.text)
            break
        except (InvalidSelectorException, NoSuchElementException):
            pass

    total_elements = driver.find_elements(
        By.CSS_SELECTOR, 'div.lI9IFe>div.y7PRA>div>div>div>div.NrDZNb>div>span'
    )

    for index, item in enumerate(total_elements, start=1):
        print(f'{index} ==> {item.text.capitalize()}')

    pharmacies_locations = []
    images_links = []
    iterator = 3
    new_iterator = 3
    for item in range(len(total_elements)):
        try:
            driver.find_element(
                By.XPATH,
                f'//*[@id="QA0Szd"]/div/div/div[1]/div[2]/div/div[1]/div/div/div[2]/div[1]/div[{iterator}]/div/a'
            ).click()
        except ElementClickInterceptedException:
            # Fall back to a JavaScript click when another element covers the link.
            driver.execute_script(
                "arguments[0].click();", driver.find_element(
                    By.XPATH,
                    f'//*[@id="QA0Szd"]/div/div/div[1]/div[2]/div/div[1]/div/div/div[2]/div[1]/div[{iterator}]/div/a'
                )
            )
        sleep(16)
        location = driver.find_element(
            By.CSS_SELECTOR,
            'div:nth-child(3)>button>div.AeaXub>div.rogA2c>div.Io6YTe.fontBodyMedium'
        ).text
        print(f'{int((iterator - 3) / 2)} ==> {location}')
        iterator = iterator + 2
        pharmacies_locations.append(location)
        try:
            pharmacy_image = driver.find_element(
                By.XPATH,
                '//*[@id="QA0Szd"]/div/div/div[1]/div[3]/div/div[1]/div/div/div[2]/div[1]/div[1]/button/img'
            ).get_attribute('src')
        except NoSuchElementException:
            try:
                pharmacy_image = driver.find_element(
                    By.XPATH,
                    '//*[@id="QA0Szd"]/div/div/div[1]/div[3]/div/div[1]/div/div/div[2]/div[1]/div[1]/div/img'
                ).get_attribute('src')
            except NoSuchElementException:
                pharmacy_image = 'No Image Found'
        images_links.append(pharmacy_image)

    with open(f'pharmacies-{cities}.csv', 'w') as csv_file:
        writer = csv.writer(csv_file)
        writer.writerow(fields)
        # Open the geocoding tool in a second tab to convert addresses to coordinates.
        driver.execute_script("window.open('','_blank')")
        driver.switch_to.window(driver.window_handles[1])
        driver.get("https://developers-dot-devsite-v2-prod.appspot.com/maps/documentation/utils/geocoder")
        sleep(6)
        for code in pharmacies_locations:
            code_converter_input = driver.find_element(By.CSS_SELECTOR, '#query-input')
            code_converter_input.clear()
            sleep(1)
            code_converter_input.send_keys(code)
            code_converter_input.send_keys(Keys.ENTER)
            sleep(3)
            try:
                driver.find_element(By.CSS_SELECTOR, '#status-line>span.OK')
                full_location = driver.find_element(
                    By.CSS_SELECTOR,
                    '#details-result-0>p.result-viewport'
                ).text.split()
                del full_location[-2:]      # drop the two trailing tokens
                full_location.pop(0)        # drop the leading label
                longitude, latitude = full_location[0].split(',')
            except NoSuchElementException:
                # Retry with a shortened query when the full address fails to geocode.
                new_converter_query = code.split()
                new_converter_query.pop(1)
                new_converter_query = new_converter_query[0] + ',' + new_converter_query[1]
                code_converter_input = driver.find_element(By.CSS_SELECTOR, '#query-input')
                code_converter_input.clear()
                sleep(1)
                code_converter_input.send_keys(new_converter_query)
                code_converter_input.send_keys(Keys.ENTER)
                sleep(3)
                try:
                    full_location = driver.find_element(
                        By.CSS_SELECTOR,
                        '#details-result-0>p.result-viewport'
                    ).text.split()
                    del full_location[-2:]
                    full_location.pop(0)
                    longitude, latitude = full_location[0].split(',')
                except NoSuchElementException:
                    location = 'unknown'
                    latitude = 'unknown'
                    longitude = 'unknown'
            except InvalidSelectorException:
                location = 'unknown'
                latitude = 'unknown'
                longitude = 'unknown'
            code_converter_input.clear()
            driver.switch_to.window(driver.window_handles[0])
            name = driver.find_element(
                By.XPATH,
                f'//*[@id="QA0Szd"]/div/div/div[1]/div[2]/div/div[1]/div/div/div[2]/div[1]/'
                f'div[{new_iterator}]/div/div[2]/div[2]/div[1]/div/div/div/div[1]/div/span'
            ).text
            location = code
            image = images_links[pharmacies_locations.index(code)]
            try:
                contact = driver.find_element(
                    By.XPATH,
                    f'//*[@id="QA0Szd"]/div/div/div[1]/div[2]/div/div[1]/div/div/div[2]/div[1]/div['
                    f'{new_iterator}]/div/div[2]/div[2]/div[1]/div/div/div/div[4]/div[2]/span[2]/jsl/span[2]'
                ).text
            except NoSuchElementException:
                try:
                    contact = driver.find_element(
                        By.XPATH,
                        f'//*[@id="QA0Szd"]/div/div/div[1]/div[2]/div/div[1]/div/div/div[2]/div[1]/div['
                        f'{new_iterator}]/div/div[2]/div[2]/div[1]/div/div/div/div[4]/div[2]/span/jsl/'
                        f'span[2]'
                    ).text
                except NoSuchElementException:
                    contact = 'No Contact Info'
            try:
                rating = driver.find_element(
                    By.XPATH,
                    f'//*[@id="QA0Szd"]/div/div/div[1]/div[2]/div/div[1]/div/div/div[2]/div[1]/'
                    f'div[{new_iterator}]/div/div[2]/div[2]/div[1]/div/div/div/div[3]/div/span[2]/'
                    f'span[2]/span[1]'
                ).text
            except InvalidSelectorException:
                rating = driver.find_element(
                    By.XPATH,
                    f'//*[@id="QA0Szd"]/div/div/div[1]/div[2]/div/div[1]/div/div/div[2]/div[1]/'
                    f'div[{new_iterator}]/div/div[2]/div[2]/div[1]/div/div/div/div[3]/div/span[2]/span[1]'
                ).text
            except NoSuchElementException:
                try:
                    rating = driver.find_element(
                        By.XPATH,
                        f'//*[@id="QA0Szd"]/div/div/div[1]/div[2]/div/div[1]/div/div/div[2]/div[1]/'
                        f'div[{new_iterator}]/div/div[2]/div[2]/div[1]/div/div/div/div[3]/div/span[2]/span[1]'
                    ).text
                except NoSuchElementException:
                    rating = 'No reviews'
            writer.writerow([name, location, contact, rating, latitude, longitude, image])
            sleep(1)
            driver.switch_to.window(driver.window_handles[1])
            sleep(2)
            new_iterator = new_iterator + 2
            print(f'{int((new_iterator - 3) / 2)} ===> {[name, location, contact, rating, latitude, longitude, image]}')

driver.close()
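The geocoder-output cleanup in the scraper is repeated in each fallback branch; it can be isolated into a small pure function, which also makes it testable without a browser. A sketch, assuming result text shaped the way the scraper parses it (a leading label, a `lng,lat` token, then two trailing tokens); the sample string below is hypothetical:

```python
def parse_geocoder_result(result_text):
    """Extract (longitude, latitude) strings from geocoder result text.

    Mirrors the scraper's cleanup: drop the two trailing tokens and the
    leading label, then split the remaining 'lng,lat' token.
    """
    tokens = result_text.split()
    del tokens[-2:]    # drop the two trailing tokens
    tokens.pop(0)      # drop the leading label
    longitude, latitude = tokens[0].split(',')
    return longitude, latitude


# Hypothetical result text in that shape:
print(parse_geocoder_result('Coordinates: 11.5021,3.8480 (approx.) OK'))
# → ('11.5021', '3.8480')
```

Each fallback branch in the scraper could then call this one helper instead of repeating the pop/split sequence.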
19 changes: 19 additions & 0 deletions backend/server.py
@@ -0,0 +1,19 @@
from flask import Flask
from flask_cors import CORS

api = Flask(__name__)
CORS(api)


@api.route('/profile')
def my_profile():
    response_body = {
        "name": "Nagato",
        "about": "Hello! I'm a full stack developer that loves python and javascript",
        "dob": "31/07/2001"
    }

    return response_body


api.run(host="localhost", port=54321, debug=True)