Facebook is using AI to predict users’ future behavior and selling that data to advertisers

In confidential documents seen by the Intercept, Facebook touts its ability to “improve” marketing outcomes with what it calls “loyalty prediction.”

Newspeak: The AI software that powers this capability, called “FBLearner Flow,” was first announced in 2016, though it was presented as a technology to make user experience better, not as a marketing tool.

How it works: The data it uses is anonymized, but includes users’ “location, device information, Wi-Fi network details, video usage, affinities, and details of friendships, including how similar a user is to their friends.”

Facebook’s defense: A Facebook spokesperson had this to say about the story: “Facebook, just like many other ad platforms, uses machine learning to show the right ad to the right person. We don’t claim to know what people think or feel, nor do we share an individual’s personal information with advertisers.”

Happy Friday the 13th: This is just the latest in a seemingly unending parade of ethical dilemmas in Facebook’s 14 years of existence. Of course, this one follows on the heels of CEO Mark Zuckerberg’s two days of testimony on Capitol Hill in connection with a separate scandal. Another data privacy drama will certainly fuel calls to regulate the social-media giant.

  • Publisher: MIT Technology Review
  • Twitter: @techreview
  • Citation: Web link

Quite a lot has been going on:

Facebook Uses Artificial Intelligence To Predict Users’ Future Actions For Advertisers, According To A Confidential Document.

The recent document, described as ‘confidential,’ outlines a new advertising service that expands how the social network sells corporations’ access to its users and their lives: instead of merely offering advertisers the ability to target people based on demographics and consumer preferences, Facebook offers the ability to target them based on how they will behave, what they will buy, and what they will think. These capabilities are the fruits of a self-improving, artificial intelligence-powered prediction engine, first unveiled by Facebook in 2016 and dubbed ‘FBLearner Flow.’

Spiritually, Facebook’s artificial intelligence advertising has a lot in common with political consultancy Cambridge Analytica’s controversial ‘psychographic’ profiling of voters, which uses mundane consumer demographics (what you’re interested in, where you live) to predict political action. But unlike Cambridge Analytica and its peers, who must content themselves with whatever data they can extract from Facebook’s public interfaces, Facebook is sitting on the motherlode, with unfettered access to staggering databases of behavior and preferences. A 2016 ProPublica report found some 29,000 different criteria for each individual Facebook user.

Zuckerberg has acted to distance his company from Cambridge Analytica, whose efforts on behalf of Donald Trump were fueled by Facebook data, telling reporters on a recent conference call that the social network is a careful guardian of information:

The vast majority of data that Facebook knows about you is because you chose to share it. Right? It’s not tracking. There are other internet companies or data brokers or folks that might try to track and sell data, but we don’t buy and sell. For some reason, we haven’t been able to kick this notion for years that people think we will sell data to advertisers. We don’t. That’s not been a thing that we do. Actually it just goes counter to our own incentives. Even if we wanted to do that, it just wouldn’t make sense to do that.

The Facebook document makes a similar gesture toward user protection, noting that all data is ‘aggregated and anonymized [to protect] user privacy,’ meaning Facebook is not selling lists of users, but rather essentially renting out access to them. But these defenses play up a distinction without a difference: regardless of who is mining the raw data Facebook sits on, the end result, which the company eagerly monetizes, is advertising insights that are very intimately about you, now packaged and augmented by the company’s marquee machine learning initiative. And although Zuckerberg and company are technically, narrowly correct when they claim that Facebook isn’t in the business of selling your data, what they’re really selling is far more valuable: the kind of 21st-century insights only possible for a company with essentially unlimited resources. The reality is that Zuckerberg has far more in common with the likes of Equifax and Experian than with any consumer-oriented company. Facebook is essentially a data wholesaler, period.

  • Publisher: UPROXX
  • Date: 2018-04-13T19:58:37+00:00
  • Author: The Intercept
  • Twitter: @UPROXX
  • Citation: Web link

It’s Time to Regulate ‘Smart City’ Technology, Too

This isn’t just about Facebook: When Google is building cities and cars are turning into data-harvesting machines, the need for laws that protect users has never been more urgent.

There’s a reason why one technology reporter compared wide-ranging questions posed by the Senate to Facebook CEO Mark Zuckerberg on Tuesday to ‘a five-hour tech support call.’ The hearing revealed a basic lack of understanding about Facebook’s data-gathering business model and consumer-facing functions.

On Wednesday, day two of Zuckerberg’s congressional testimony in the wake of the Cambridge Analytica data scandal, House members seemed to have done a little more homework. But even lawmakers who started off sharp wound up leaning on Zuckerberg for advice on regulating his own company. Their tougher questions didn’t add up to a clear picture of what has gone wrong at the social media giant.

‘Widespread concerns have been raised about the lack of security controls in many IoT devices,’ stated the U.S. Government Accountability Office in a May 2017 report on the Internet of Things. ‘[That] is in part because many vehicles, equipment, and other increasingly IoT-enabled devices were built without anticipating threats associated with Internet connectivity or the requisite security controls.’

Some experts argue that the complex software of self-driving vehicles will make them safer from hacks than current cars. And automakers are now investing in ramped-up cybersecurity protections. But if any industry has shown the limits of the concept of self-regulation, it’s car companies. Look at the history of basic vehicle safety and environmental laws. Today’s cars kill people at about half the rate they did in the 1970s, largely because federal safety laws forced U.S. automakers (usually against their will) to re-engineer their vehicles to pass increasingly stringent crash tests and install three-point seatbelts, airbags, anti-lock brakes, and other safety equipment. The recent Volkswagen emissions deception scandal reminded us how insufficient the incentives to comply with environmental regulations are for large auto companies, and revealed the cozy relationship many of them have with the federal government.