The scraper is behaving strangely. On every run it downloads an Excel file of recent filings, and it also re-downloads a batch of historic PDFs, one of which no longer parses correctly even though its data is already present in the Excel file.
We also can't know in advance whether the scraper will fail when the next generation of PDF is released.
A new workaround to at least bypass some of that:

- ZIP up BLN's existing cache archive (through the ca branch) and include the output CSV in the ZIP.
- Upload that ZIP to Google Cloud Storage.
- Take the last good CSV and store it separately in Google Cloud Storage.
- Rework the scraper so it no longer downloads any of the existing PDFs through today's date.
- Rework the scraper to download the processed data from Google Cloud Storage and import it before writing the final output CSV.
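The last step above, fetching the pre-processed CSV and merging it with freshly scraped rows before writing the final output, might look roughly like this. This is only a sketch: the bucket URL, column names, and dedupe key are assumptions, not what the actual scraper uses.

```python
import csv
import io
import urllib.request

# Hypothetical public URL of the last-good CSV stored in Google Cloud Storage.
CACHED_CSV_URL = "https://storage.googleapis.com/example-bucket/ca/last_good.csv"


def fetch_cached_rows(url=CACHED_CSV_URL):
    """Download the pre-processed historical CSV from Cloud Storage."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))


def merge_rows(cached_rows, fresh_rows, key_fields=("company", "notice_date")):
    """Combine cached historical rows with freshly scraped rows.

    When both sources report the same filing (same key fields), the
    freshly scraped row wins, since it may contain corrections.
    """
    merged = {}
    for row in cached_rows + fresh_rows:  # later (fresh) rows overwrite earlier ones
        merged[tuple(row.get(f, "") for f in key_fields)] = row
    return list(merged.values())
```

The scraper would then write `merge_rows(fetch_cached_rows(), scraped_rows)` to the final output CSV, skipping the historic PDF downloads entirely.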
This may break as soon as the scraper detects the first July 2024 layoffs and tries to parse that PDF, but at least some of the runtime, data transfer, and senseless processing can be avoided.
stucka changed the title from "CA scraper down" to "CA scraper hopelessly inefficient" on Jul 8, 2024.