Users want to know they are interacting with real people on social websites but bad actors often want to promote posts with fake engagement (for example, to promote products, or make a news story seem more important). Websites can only show users what content is popular with real people if websites are able to know the difference between a trusted and untrusted environment.
Wrong. Wrong wrong wrong.
If I am running an artificial engagement campaign, there is absolutely nothing stopping me from acquiring ten unique devices with authentic environments and operating them manually. It does not have to be fully automated.
More specifically, this point:
if websites are able to know the difference between a trusted and untrusted environment.
This SHOULD read: if websites are able to know the difference between a trusted and untrusted USER. That is, authentication of a person; authentication of client software is no substitute for it. Huge difference.
The funny thing is that this proposal is perceived very much as a sly way to identify, at the very least, whether a client is making requests on behalf of real people, for the purpose of increasing the value of advertisements.
What is more, a website showing “what is popular” is not necessarily good for society. It creates the very incentive for artificially boosting content on social media. Look how well that worked out for the world in the case of Twitter.
I feel bad for you all.
If I am running an artificial engagement campaign, there is absolutely nothing stopping me from acquiring ten unique devices with authentic environments and operating them manually. It does not have to be fully automated.
No need to reference this as a hypothetical. This is already pretty common.
Maybe this will slightly increase costs for fraudsters by pushing them from automated to manual forms of click fraud, but pretending that it comes anywhere close to solving the stated problem demonstrates a shocking lack of basic background knowledge on the authors' part.