[facebook] add support #5626

Open · wants to merge 27 commits into master
Conversation

@zWolfrost commented May 22, 2024

Fixes #470 and #2612 (probably a duplicate).
For now it supports account photos & photo albums.
The only way it can work is by making one request per post, so unfortunately it's not well optimized.

@zWolfrost commented May 23, 2024

It looks like Facebook blocks your account for about an hour when you run the extractor on too many images.
It happened to me after running the extractor and downloading 1800 of them.
Also, it appears to be only an account-level block (logging out removes it), and it only prevents you from viewing images by opening them from a link (opening them through the React UI still works).
It's probably best if the extractor actively avoids using the imported cookies, unless requested otherwise (with proper warnings).
Please let me know your thoughts.
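For reference, imported cookies are typically supplied through gallery-dl's standard "cookies" extractor option; a minimal config sketch (the path is a placeholder, and the "facebook" key assumes this PR's extractor name):

{
	"extractor": {
		"facebook": {
			"cookies": "/path/to/cookies.txt"
		}
	}
}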

@zWolfrost marked this pull request as ready for review May 31, 2024 14:20
@MrJmpl3 commented Jun 13, 2024

> It's probably best if the extractor actively avoids using the imported cookies, unless requested otherwise (with proper warnings).

I think photos and videos have a signature in the URL; maybe Facebook can track you and ban you using this info.

@zWolfrost

> I think photos and videos have a signature in the URL; maybe Facebook can track you and ban you using this info.

I don't think I understand how you could ban someone using a signature in the photo URL. I think the most reasonable option would just be to use the request cookies (which include your account IDs and such) to account-ban you.

As I mentioned, it's not really a complete ban; it's limited to some parts of the UI, and logging out (thus not sending the account request cookies) does remove the block, with the tradeoff that you can't view private or R-18 images.

I still don't know whether being logged out in advance prevents the ban altogether. If that's the case, I will add a warning about it.

also added author followups for singular images
@zWolfrost

After doing some more testing, I can tell that not using cookies still gets you blocked from viewing images, in the sense that you are forced to log in, and it happens much faster than when using them.
I think it's best not to use cookies unless Facebook forces the user to log in, and to print a small warning whenever the extractor uses them. Doing this, I could extract about 2400 images before getting temporarily blocked, which I think is pretty good.

Also, I'd love to know if the extractor works for anyone else, so please feel free to let me know.

@AdBlocker69

Hi, I've tested your version and it seems to be working fine for pictures. I'm planning to save quite a few images from a public Facebook page and was wondering whether using one of the --sleep options could keep you from being blocked (or whether Facebook just reacts to an arbitrary number of requests, no matter the frequency). And overall: does the block mean I'm generally unable to connect to Facebook services (like using gallery-dl with it), or does it just restrict browser/account interaction?
Let me not forget: thanks for your work. I hope this gets implemented in the official project soon. Facebook is (still) such a big platform, so having a tool like gallery-dl support it is pretty important (imo)...

Facebook video support would be nice too. Luckily, in my case there weren't that many videos, so I was able to download them one by one with yt-dlp... but yt-dlp doesn't support album/account video downloading (yet) the way it does for YouTube, for example.

@zWolfrost

Hi, thank you for your feedback.

I'm not sure if waiting between requests would work, and if it did, I have no idea how long the wait should be or after how many images it should start. That would require a lot of testing, and unfortunately every time I get blocked I have to wait about 6 hours to try again.

To be more specific, the "block" I'm talking about only prevents you from accessing images by their URL (the way the extractor does it); you can still access them through the React user interface.
That means accessing them by clicking on them, scrolling with the arrows, etc., but if, for example, you reload the page while viewing one, an error pops up about you "using this feature too fast".
When not using an account, you don't get the error, but you get redirected to the login page instead (aside from that, the behavior is the same).

As far as I can tell, the block is limited to this, and you can do anything else on Facebook.

About video support, I will keep that in mind. I'm not sure how yt-dlp downloads videos; I will check that out when I have time.

@AdBlocker69

Thanks for the info :)
Btw, do you know what to do when, say, you have downloaded 2400 pictures, get blocked afterwards, and want to continue downloading from the same profile? Can you just continue after the 6 hours? I'd guess that when checking for duplicates gallery-dl still makes requests for those 2400 images (as it goes chronologically from newest to oldest post), or does that work differently?

@zWolfrost

No, I'm sorry; once you get blocked, the photo page doesn't load at all (assuming you're loading it by its URL), so there is no way to get its metadata. This is why I just added a way to continue the extraction from an image in the set/album instead of having to start from the beginning: take the photo URL and add "&setextract" to it to download the whole set from that point instead of the photo alone, as in the example below. The user is prompted with this URL if they get blocked while extracting.
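For example (the URL and IDs are hypothetical; note the double quotes, which keep the shell from treating the "&" as a command separator, as discussed below):

gallery-dl "https://www.facebook.com/photo/?fbid=0000000000&set=a.0000000000&setextract"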

@AdBlocker69

Good idea implementing that 👍🏻 Does it only work with the prompted URL? I just tried it up front by taking an image link and adding "&setextract" to it, but it gave me an 'unknown command' error after downloading just the single image.

Also, it seems like your video extraction only pulls the version without audio (the best one in yt-dlp's list of formats, though yt-dlp merges it with an audio-only version by default). So it would be best to either add the ffmpeg merge by default or select the "hd" format by default, which has video+audio.

@zWolfrost commented Jun 18, 2024

The "&setextract" feature didn't work for you because you probably passed the URL to gallery-dl without the double quotes ("), so the command prompt treated the "&" as a separator between two commands (you can use the ampersand to execute two commands on one line). That would also explain why it downloaded the image and then gave you an "unknown command" message, as you presumably don't have a command assigned to the "setextract" keyword.

By the way, after further inspection, I don't think there's a way to make an "all profile videos" extractor, as the videos don't share a set ID I can use to navigate through them all.

Good catch on the audio, though; I wasn't wearing headphones :)
I've fixed it now, and the audio gets downloaded as well. By default the two streams are kept separate; to merge them, you'll have to let youtube-dl/yt-dlp handle the download by adding "videos": "ytdl" to the facebook extractor configuration, as sketched below.
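A minimal config sketch of that option (the "videos" option name is the one this PR introduces), following the same layout as the config examples later in this thread:

{
	"extractor": {
		"facebook": {
			"videos": "ytdl"
		}
	}
}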

@AdBlocker69 commented Jun 18, 2024

Okay, several things I just realized by doing some trial and error 😅:
First of all, I need to use the full link for gallery-dl to detect which "set" I even want to continue downloading from; I had just used the short version (marked blue) and wondered why it didn't do anything anymore after downloading the image the link points to.

[screenshot]

Secondly, I then need to put the full link in quotation marks, since otherwise, as you said, the text after the ampersand is treated as a second command (marked red), giving me a 'syntax error' and not processing anything further after downloading the image the link points to.

[screenshot]

So this is how it has to be written to avoid any errors from the link format and command logic:

[screenshot]

Now I got it to work successfully 👌🏻

Alternatively, you can also manually append the set ID to the 'short' image link, with "&set=" in front of it; the set ID is in the name of the folder your set images were saved to (e.g. the 'Timeline photos' set, i.e. all account images):

[screenshot]

@zWolfrost

I'm sorry things got confusing 😥 at least you managed to make it work now.
Of course, someone whose download had been blocked would have been prompted with the full URL already, so there is no chance of this happening in a real situation (at least I hope so).
I will see if there is a way to get the set ID by inspecting the photo page itself (if I remember correctly, there should be a default one).

@AdBlocker69 commented Jun 18, 2024

No problem, that's just what happens when the starting situation is slightly different :)
I just like to use short links because, when taking them directly while browsing the web, they sometimes contain certain parameters (like a smaller-than-source image size, etc.) that are undesirable. So it's more or less best practice for me to take the link as 'raw' as possible to avoid any of that.
In this case, though, the extra information in the link was vital...

@zWolfrost commented Jun 18, 2024

There, I just had to change the matching URL pattern a little. Now it works even without the set ID included; hopefully it's the same for you. I recommend avoiding this anyway, as Facebook acts a little weird when you navigate images without their set ID: sometimes their sequence gets changed, or some images get skipped altogether. Or maybe it just works fine and I unintentionally bug-fixed it a while ago; I don't know.

@AdBlocker69 commented Jun 20, 2024

Works 🤙🏻
Thanks; I guess it's generally helpful to have it work like that too, for when you just have the image link from a third-party source and want to download from that point back; then you don't have to go back to the profile page itself to find the set ID.
So quality-of-life-wise it's good, no question.

the extractor should be ready :)
@zWolfrost commented Aug 9, 2024

> Ah, gotcha. Straightforward for just one photo, but I'm looking to do this for an entire album of photos (for my own picky reasons).
> I'm thinking this will involve getting the "Open image in new tab" link, then grabbing the extended filename format from the link for each photo. Any idea how to go about doing this?

I don't think I understand what you mean. The extractor is capable of extracting entire albums of photos by itself; just copy the album URL (it should have this format: https://www.facebook.com/media/set/?set=a.10152716010956729&type=3) and pass it on the command line, along with the parameter I suggested before. Remember to put the URL in double quotes (").

@lobt4 commented Aug 9, 2024

> Ah, gotcha. Straightforward for just one photo, but I'm looking to do this for an entire album of photos (for my own picky reasons).
> I'm thinking this will involve getting the "Open image in new tab" link, then grabbing the extended filename format from the link for each photo. Any idea how to go about doing this?

> I don't think I understand what you mean. The extractor is capable of extracting entire albums of photos by itself [...]

My bad, I wasn't being clear. I'm looking to download from a https://www.facebook.com/xxxxxxxxxx/photos page (since not all of the photos are in albums), and instead of every photo being saved with this filename format:
[screenshot]

I want the extractor to automatically save each photo with this filename format:
[screenshot]

@zWolfrost

> My bad, I wasn't being clear. I'm looking to download from a https://www.facebook.com/xxxxxxxxxx/photos page (since not all of the photos are in albums), and instead of every photo being saved with this filename format: [screenshot]
>
> I want the extractor to automatically save each photo with this filename format: [screenshot]

The extractor also supports downloading all photos from an account.
Just pass the account URL (e.g. https://www.facebook.com/facebook/photos/) and it will automatically extract all of them.

@lobt4 commented Aug 9, 2024

> My bad, I wasn't being clear. [...]

> The extractor also supports downloading all photos from an account. Just pass the account URL (e.g. https://www.facebook.com/facebook/photos/) and it will automatically extract all of them.

I'm using the command __main__.py https://www.facebook.com/xxxxxxxxxx/photos and this is the error message I'm getting (I can visit the photos page in a browser):
[screenshot]

@zWolfrost commented Aug 9, 2024

Thanks for the report; that's definitely not supposed to happen. Is the profile you are trying to extract private? What is your OS language? (That has caused me problems in the past, and if it's not English I will start troubleshooting from there.) Thanks for your patience.

@lobt4 commented Aug 9, 2024

> Thanks for the report; that's definitely not supposed to happen. Is the profile you are trying to extract private? What is your OS language? [...]

No worries, I appreciate your help.
FB profile is public, OS language is English (US).

@zWolfrost commented Aug 9, 2024

OK, are you using a VPN or anything else that could change your apparent country? Some people country-block their profile. Also, is this happening only with this one profile? If so, please consider emailing me the profile you are trying to extract if you don't feel comfortable posting it here. I won't judge or anything; I've seen a lot of stuff. It would speed up the process a lot, and I would really appreciate it. I really have no idea where to start right now...

@lobt4 commented Aug 9, 2024

> OK, are you using a VPN or anything else that could change your apparent country? Some people country-block their profile. Also, is this happening only with this one profile? [...]

Nope, not using a VPN, and it seems to be happening for every profile's "photos" page, but not for the photo albums they've made.

I have a workaround where I use an extension to grab all the photo links on a "photos" page, which I then pass to gallery-dl in a text file to download the entire page. But this circles back to my first question of how to automatically save all downloaded photos with the image ID (e.g. 344582_258993345612475_8286065_n.jpg) as the filename instead of the current default post ID.

@zWolfrost

> Nope, not using a VPN, and it seems to be happening for every profile's "photos" page, but not for the photo albums they've made.
>
> I have a workaround where I use an extension to grab all the photo links on a "photos" page, which I then pass to gallery-dl in a text file to download the entire page. But this circles back to my first question [...]

I assume you mean "saving photos by their filename" without passing the argument I mentioned before every time (-f "{filename}").
In that case, you can configure gallery-dl to do exactly that every time by adding a config.json file in one of gallery-dl's standard config file locations, with something like this in it:

{
	"extractor": {
		"facebook": {
			"filename": "{filename}"
		}
	}
}
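For a one-off run, the equivalent command-line form would be the following (the profile URL is a placeholder, as above):

gallery-dl -f "{filename}" "https://www.facebook.com/xxxxxxxxxx/photos"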

I will still definitely try to find the cause of the main issue.

@lobt4 commented Aug 9, 2024

> I assume you mean "saving photos by their filename" without passing the argument I mentioned before every time (-f "{filename}"). In that case, you can configure gallery-dl to do exactly that every time by adding a config.json file in one of gallery-dl's standard config file locations [...] I will still definitely try to find the cause of the main issue.

Ah, well, that's embarrassing. I kept thinking I had to manually replace the {filename} argument with the image filename inside the curly brackets, instead of just using the argument as-is. Not very smart of me.
The argument works just like you said, and I'm able to automatically save photos with my preferred filenames.
Apologies for the confusion, and thanks for your patience.

@zWolfrost

> Ah, well, that's embarrassing. I kept thinking I had to manually replace the {filename} argument with the image filename inside the curly brackets, instead of just using the argument as-is. [...]

Don't worry, it happens to the best of us. By the way, I've just fixed something that could have been the cause. Could you try again with the latest commit and let me know if it works now?

@lobt4 commented Aug 10, 2024

> Don't worry, it happens to the best of us. By the way, I've just fixed something that could have been the cause. Could you try again with the latest commit and let me know if it works now?

It errored out the same as last time; perhaps this is happening on just my machine for some reason:
[screenshot]

@AdBlocker69

Hi,
I just tried having yt-dlp handle the Facebook video extraction and I'm getting this error:
[screenshot]
Might it be that it only searches for "youtube-dl" and therefore can't find it?

@zWolfrost commented Aug 15, 2024

@AdBlocker69 Please enter this line in the CMD: py -m yt_dlp (Windows) / python3 -m yt_dlp (Linux).

If it prints No module named yt_dlp or similar, then it's something to do with your yt-dlp installation;

If it prints something like You must provide at least one URL, then try whether this happens with other extractors as well (reddit and twitter, for example) and let me know what happens. Remember to try the other extractors on the same repo you are using for the facebook one.

For your information, gallery-dl should, by default, try to import "yt_dlp" and fall back to "youtube_dl". For me it works fine, so this is kind of odd.
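That fallback boils down to roughly this (a minimal sketch for illustration, not gallery-dl's actual source; the alias name is arbitrary):

try:
	import yt_dlp as ytdl_module  # preferred module
except ImportError:
	import youtube_dl as ytdl_module  # fallback if yt-dlp isn't installed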

@AdBlocker69 commented Aug 15, 2024

I don't have yt-dlp installed via Python, just as the standalone exe; could that be the culprit? (I get No module named yt_dlp...)

@zWolfrost commented Aug 15, 2024

Yeah, it most likely is. You should install yt-dlp with pip (pip install yt-dlp); follow a tutorial if you don't have pip installed and don't know how to get it, there are plenty of them. As far as I know, there is no way to make gallery-dl use the yt-dlp executable instead of the Python module.

@AdBlocker69 commented Aug 15, 2024

Thanks, this way it works.
By default, yt-dlp pulls the best video + best audio and merges them using FFmpeg.
In this case that would be "452530707751531v" + "1054256548546316a".
[screenshot]
How would I go about telling yt-dlp to pull "hd" instead, for example?
The following doesn't do the trick (the normal way of selecting the yt-dlp format in gallery-dl, per the gallery-dl configuration documentation):
[screenshot]

@zWolfrost

The reason it's not working is that the option goes in the "downloader" options, not the "extractor" ones. Took me a bit 😅

{
	"extractor": {
		"#": "not here!"
	},

	"downloader": {
		"ytdl": {
			"format": "sd"
		}
	}
}

@AdBlocker69

LOL, okay, thanks, that works better now 😂...
Logically it makes sense for it to be in "downloader", but I simply didn't think of that even being an option, as I saw the same argument existing in the "extractor" options.
What is the one in there for, then?

Fixed some metadata attributes not decoding correctly for non-Latin languages, or not showing at all.
Also improved a few patterns.
-Added tests
-Fixed video extractor giving incorrect URLs
-Removed start warning
-Listed supported site correctly
I've chosen to remove the "reactions", "comments" and "views" attributes, as I felt they require additional maintenance even though nobody would ever actually use them to order files.
I've also removed the "title" and "caption" video attributes due to their inconsistency across different videos.
Feel free to share your thoughts.
Successfully merging this pull request may close these issues.

[Request] site support: facebook.com