Facebook user data leakage and the growth of extremist communities

Facebook was hit by a huge wave of negativity. Users joined the #DeleteFacebook campaign en masse (even Tesla and SpaceX boss Elon Musk deleted his page), and trust in the IT giant dropped to an all-time low: according to polls, only 41 percent of Americans still trusted Zuckerberg's brainchild with their data. Even worse for Facebook was the threat from 3,000 advertisers, who promised to pull all their ads until Zuckerberg explained the data leak.

Facebook executives had to react. A week after the incident, the company promised to restrict third-party apps' access to user information, including data on religious and political affiliation, personal relationships, education and employment.

A year later, the social network introduced a tool for controlling personal information: if they wished, users could stop the transfer of their data to third parties. On paper, this decision sounds very generous, but it came with caveats. For one, user information continued to be collected, albeit in anonymized form. And few users actually stopped the transfer of their data: according to a Pew Research study, 74 percent of users did not even know the function existed. In other words, the innovation changed little in practice.

Why did the developers bury this feature so deep in the settings menu? So that people would use it as little as possible. After all, the less Facebook knows about its users, the less revenue it earns from them.

Fake security



It is very difficult to evaluate the effectiveness of today's user-protection measures, because they are all highly opaque. In February 2019, Zuckerberg announced that he planned to invest more than $3.7 billion in Facebook's security.

“We are now taking steps that we could not have taken a few years ago. For example, this year we plan to spend more on security than all of our proceeds from the IPO (initial public offering, the first public sale of shares),” Zuckerberg wrote on his page.

$3.7 billion is a genuinely large sum, but it represents only 5 percent of Facebook's annual revenue. It is also less than the $5 billion fine the social network paid to the US Treasury over the user data leak.

Where did this money go? Zuckerberg said nothing about it. Two years later, it is hard to say whether it really improved the security of the social network in any way. Moreover, in early 2021 there was another major leak, exposing the information of 533 million Facebook users.

And Facebook's algorithms have not stopped collecting and processing data about users' actions.

Who is at the helm?




It is clear that social networking algorithms have become so advanced over time that today they largely control what the user sees, knows and feels. The impact of these technologies is clearly demonstrated by a 2014 Facebook study. The company decided to investigate how specially selected news alters the moods of the people it is shown to. Nearly 700,000 people took part in a week-long trial that they didn't even know about. They were divided into two groups: one was shown a feed with positive posts filtered out, and the other a feed with negative posts filtered out.

“When we reduced the percentage of positive messages in the feed, the same people began writing positive posts less often and negative posts more often. When we cleared the feed of negativity, the opposite trend was observed. These results suggest that other Facebook users' emotions influence our own,” the study's authors said. Remarkably, the changes became noticeable within a single week. It is not hard to imagine what would happen if people were manipulated this way for years.
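The study's published methodology measured emotional shifts through the share of positive and negative words in users' subsequent posts. Below is a minimal sketch of that kind of between-group comparison; the lexicon, sample posts and function name are all hypothetical, invented purely for illustration.

```python
# Minimal sketch of the between-group comparison the 2014 study describes:
# compare how often users in each feed condition write positive words after
# the manipulation. All data below is hypothetical.

def positive_word_rate(posts: list[str], positive_words: set[str]) -> float:
    """Fraction of words across all posts that appear in the positive lexicon."""
    words = [w.lower().strip(".,!?") for p in posts for w in p.split()]
    if not words:
        return 0.0
    return sum(w in positive_words for w in words) / len(words)

# Hypothetical lexicon and post samples for the two experimental groups.
POSITIVE = {"happy", "great", "love", "wonderful"}
reduced_positive_feed = ["feeling tired today", "everything is awful"]
reduced_negative_feed = ["what a wonderful morning", "love this great weather"]

# A positive difference means the group shielded from negativity wrote
# more positive words, consistent with the emotional-contagion effect.
effect = (positive_word_rate(reduced_negative_feed, POSITIVE)
          - positive_word_rate(reduced_positive_feed, POSITIVE))
print(f"Difference in positive-word rate between groups: {effect:.2%}")
```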

The study sparked a wave of criticism against Facebook. Its authors were accused of testing algorithms on users without their permission. However, the researchers did not break the law: when creating an account on the social network, a person automatically agrees to all of its terms, including participation in such research.

But most importantly, the published material revealed the essence of algorithmic feeds. Since they automatically assemble content for each of the platform's 2.7 billion users, the social network's developers have no real control over them. And the algorithms, in turn, are unable to assess or understand their impact on living people.

In 2018, researchers inside Facebook found that the algorithmic feed exploits the human brain's attraction to divisiveness. The experts concluded that the algorithms must be reined in, otherwise they will “feed” users increasingly dangerous content to hold their attention and increase the time they spend on the platform. This is why the social network actively promotes posts that incite hatred and radical political views: it is precisely this content that provokes the strongest reactions.
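Facebook's actual ranking code is not public, so the following is only a toy sketch of the dynamic the researchers describe: if divisive posts reliably attract more reactions, a feed that scores content purely on predicted engagement will surface them first. All names and numbers here are hypothetical.

```python
# Toy illustration (not Facebook's actual ranking code): a feed that scores
# posts purely on predicted engagement will surface divisive content whenever
# such content reliably draws more reactions.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_reactions: float  # model's estimate of likes/comments/shares
    divisive: bool              # label for illustration only

def rank_feed(posts: list[Post]) -> list[Post]:
    # Engagement-only objective: nothing here penalizes divisiveness,
    # so the ranking simply rewards whatever provokes the most reactions.
    return sorted(posts, key=lambda p: p.predicted_reactions, reverse=True)

feed = [
    Post("Local park cleanup this weekend", 12.0, divisive=False),
    Post("THEY are destroying our country!", 87.0, divisive=True),
    Post("New bakery opened downtown", 9.0, divisive=False),
]

for post in rank_feed(feed):
    print(f"{post.predicted_reactions:5.1f}  divisive={post.divisive}  {post.text}")
```

Run as-is, the divisive post lands at the top of the feed, even though nothing in the code mentions hatred or politics: the objective alone produces the effect.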

Facebook sociologist Monica Lee pointed out that the platform's algorithms are responsible for the growth in the number of extremist communities. “64 percent of all new subscribers to extremist groups came from recommendations,” Lee wrote. However, Facebook's management ignored this internal research, and the proposals for improving the situation went unimplemented. The conclusion is simple: “humanizing” the algorithms is not profitable for social networks.






