Artificial intelligence needs to be regulated to protect humans from manipulation.

AI could influence our decisions in a major way — and lead to dangerous outcomes, says technology researcher TaeWoo Kim.
According to his research with Adam Duhachek, AI messages were more persuasive when they showed people how to perform tasks rather than why those tasks matter.
People were also more likely to accept unfair offers from an AI and to disclose personal information to one, signs that we are more vulnerable to manipulation than we might think.
Kim believes governments need to take these behaviors into account and push for protective measures when regulating AI. 
Visit Business Insider’s homepage for more stories.

Have you ever used Google Assistant, Apple’s Siri, or Amazon Alexa to make decisions for you? Perhaps you asked it what new movies have good reviews, or to recommend a cool restaurant in your neighbourhood.

Artificial intelligence and virtual assistants are constantly being refined, and may soon be making appointments for you, offering medical advice, or trying to sell you a bottle of wine.

Although AI technology has miles to go to develop social skills on par with ours, some AI has shown impressive language understanding and can complete relatively complex interactive tasks.

In several 2018 demonstrations, Google’s AI booked haircut appointments and restaurant reservations without the receptionists realising they were talking to a non-human.

Would you let Google Duplex make phone bookings for you?

It’s likely the AI developed by tech giants such as Amazon and Google will only become more capable of influencing us in the future.

But what do we actually find persuasive?

My colleague Adam Duhachek and I found AI messages are more persuasive when they highlight “how” an action should be performed, rather than “why”. For example, people were more willing to put on sunscreen when an AI explained how to apply sunscreen before going out, rather than why they should use sunscreen.

We found people generally don’t believe a machine can understand human goals and desires. Take Google’s AlphaGo, an algorithm designed to play the board game Go. Few people would say the algorithm can understand why playing Go is fun, or why it’s meaningful to become a Go champion. Rather, it just follows a pre-programmed algorithm telling it how to move on the game board.

Our research suggests people find AI’s recommendations more persuasive in situations where AI shows easy steps on how to build personalized health insurance, how to avoid a lemon car, or how to choose the right tennis racket for you, rather than why any of these are important to do in a human sense.

Does AI have free will?

Most of us believe humans have free will. We compliment someone who helps others because we think they do it freely, and we penalize those who harm others. What’s more, we are willing to lessen the criminal penalty if the person was deprived of free will, for instance if they were in the grip of a schizophrenic delusion.

But do people think AI has free will? We did an experiment to find out.

Imagine this scenario: someone is given $100 and offers to split it with you. They’ll get $80 and you’ll get $20. If you reject the offer, both you and the proposer end up with nothing. Gaining $20 is better than nothing, but previous research suggests the $20 offer is likely to be rejected because we perceive it as unfair. Surely we should get $50, right?

But what if the proposer is an AI? In a research project yet to be published, my colleagues and I found the rejection ratio drops significantly. In other words, people are much more likely to accept this “unfair” offer if proposed by an AI.

This is because we don’t think an AI developed to serve humans has malicious intent to exploit us. It’s just an algorithm without free will, so we might as well just accept the $20.

The fact people could accept unfair offers from AI concerns me, because it might mean this phenomenon could be used maliciously. For example, a mortgage loan company might try to charge unfairly high interest rates by framing the decision as being calculated by an algorithm. Or a manufacturing company might manipulate workers into accepting unfair wages by saying it was a decision made by a computer.

To protect consumers, we need to understand when people are vulnerable to manipulation by AI. Governments should take this into account when considering regulation of AI.

We’re surprisingly willing to divulge personal details to AI

In other work yet to be published, my colleagues and I found people tend to disclose their personal information and embarrassing experiences more willingly to an AI than a human.

We told participants to imagine they were at the doctor’s office with a urinary tract infection. Half of them spoke to a human doctor, and half to an AI doctor. We told them the doctor would ask a few questions to find the best treatment, and that it was up to them how much personal information they provided.

Participants disclosed more personal information to the AI doctor than to the human one when answering potentially embarrassing questions about the use of sex toys, condoms, and other sexual activities. We found this was because people don’t think AI judges our behavior, whereas humans do. Indeed, when we asked participants how concerned they were about being negatively judged, we found that this concern was the underlying mechanism determining how much they divulged.

It seems we feel less embarrassed when talking to AI. This is interesting because many people have grave concerns about AI and privacy, and yet we may be more willing to share our personal details with AI.

But what if AI does have free will?

We also studied the flipside: what happens when people start to believe AI does have free will? We found that giving an AI human-like features or a human name makes people more likely to believe it has free will.

This has several implications:

AI can then better persuade people on questions of “why”, because people think a human-like AI may be able to understand human goals and motivations.
An AI’s unfair offer is less likely to be accepted, because a human-looking AI may be seen as having its own, potentially exploitative intentions.
People start feeling judged by the human-like AI, feel embarrassed, and disclose less personal information.
People start feeling guilty about harming a human-looking AI, and so act more benignly towards it.

We are likely to see more and different types of AI and robots in future. They might cook, serve, sell us cars, tend to us at the hospital, and even sit at a dining table as a dating partner. It’s important to understand how AI influences our decisions, so we can regulate AI to protect ourselves from possible harms.

TaeWoo Kim, lecturer, UTS Business School, University of Technology Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read the original article on Business Insider


Facebook CEO Mark Zuckerberg leaving The Merrion Hotel in Dublin after a meeting with politicians to discuss regulation of social media and harmful content in April 2019.

We learned this week that the Trump campaign may have tried to dissuade millions of Black voters from voting in 2016 through highly targeted online ads.
The investigation, by Channel 4, highlighted a still little-understood online advertising technique, microtargeting.
This targets ads at people based on the huge amount of data available about them online.
Experts say Big Tech needs to be much more transparent about how microtargeting works, to avoid overblown claims but also counter a potential threat to democracy.
Visit Business Insider’s homepage for more stories.

The Trump campaign in 2016 used online ads to try and dissuade Black voters from voting, according to an investigation this week by UK broadcaster Channel 4.

A cache of documents obtained by Channel 4 included a database of some 200 million Americans’ Facebook accounts, broken down into characteristics like race, gender, and even conclusions about their personalities.

The database was split into different groups and one group, unsubtly labelled “deterrence,” was disproportionately made up of Black users. The idea was to use tailored online ads on platforms like Facebook to dissuade this group from heading to the ballot box. (The Trump campaign has denied this report.)

Trying to sway voters through advertising is not new, but Channel 4’s investigation was a reminder of how the Trump campaign made use of a still little-understood type of advertising called microtargeting.

Microtargeting involves using the huge amounts of data consumers give away online, such as who they are friends with and what they like, to target ads at them.

While microtargeting has long been part of what makes the likes of Facebook and Google so profitable, it really came under mainstream scrutiny in 2018 during the data leak involving political consultancy Cambridge Analytica.

Per investigations by The Observer in 2018, Cambridge Analytica used data to build up “psychographic profiles” of people in order to more accurately target them with political ads.

According to Channel 4’s investigation this week, Cambridge Analytica was behind the 2016 ads targeting Black voters in Georgia.

And yet experts warn that while microtargeting is troubling, ascribing mysterious powers of mass persuasion to the technology may be a distraction from real voter suppression. Ultimately, we need to understand the technology better.

Concerns about microtargeting could be overblown

The very broad strokes of how microtargeting works go like this: Your behavior online generates a wealth of data about what you are like.

This data is analysed by companies to try and draw as many conclusions about you as possible and build up a profile, from basic demographic details like your age right down to subjective things like your personality type.

When advertisers place ads on social media platforms, they can tailor the intended audience for those ads according to these much finer details. The sell is vast scale and speed.
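To make those broad strokes concrete, here is a minimal, purely illustrative sketch of the kind of filtering this enables: a hypothetical profile database, with attributes inferred from online behaviour, gets narrowed into a tailored ad audience. The Profile fields, the example data, and the build_audience helper are invented for illustration and are not any platform’s actual API.

```python
# Purely illustrative sketch of microtargeting's basic mechanics.
# Every field, value, and helper here is hypothetical; no real
# platform API or dataset is being described.
from __future__ import annotations

from dataclasses import dataclass
from typing import Callable


@dataclass
class Profile:
    user_id: str
    age: int
    region: str
    inferred_interests: set[str]   # guessed from likes, follows, searches
    inferred_personality: str      # a coarse "psychographic" label


def build_audience(profiles: list[Profile],
                   rule: Callable[[Profile], bool]) -> list[str]:
    """Return the user IDs whose inferred profile matches the targeting rule."""
    return [p.user_id for p in profiles if rule(p)]


if __name__ == "__main__":
    profiles = [
        Profile("u1", 34, "GA", {"home improvement", "local news"}, "conscientious"),
        Profile("u2", 22, "GA", {"sneakers", "hip hop"}, "extroverted"),
        Profile("u3", 45, "TX", {"fishing", "local news"}, "agreeable"),
    ]

    # One hypothetical targeting rule: young adults in Georgia with a
    # particular inferred interest. An advertiser can define many such
    # narrow rules and attach a different ad to each audience.
    audience = build_audience(
        profiles,
        lambda p: p.region == "GA" and p.age < 30 and "sneakers" in p.inferred_interests,
    )
    print(audience)  # ['u2']
```

The point is not any single rule, but that platforms let advertisers run many such narrow rules over enormous profile databases almost instantly, which is where the promised scale and speed come from.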

Felix Simon, a communications expert at the Oxford Internet Institute, told Business Insider that media reports on political campaigns using microtargeting too often take it as read that their tactics have succeeded in changing people’s minds.

Although the Channel 4 documentary rightly pointed out that voter turnout among Black people fell in the districts where the Trump campaign deployed its “deterrence” campaign, Simon said this could be down to correlation rather than causation.

“I think what we see here is first and foremost a form of negative advertising which has a long and dirty history and can have voter suppression as one of its aims,” he said. “But that it actually works (and on such a scale as suggested here) is highly doubtful and — based on everything we know about targeted advertising and attempts at persuasion — most likely pales in comparison to very real voter suppression efforts, which include removing polling stations, gerrymandering, or restrictive voting laws.”

Simon is broadly skeptical of the so-called “psychographic” microtargeting employed by Cambridge Analytica — i.e. trying to use people’s data to make conclusions about their personality and target them accordingly. He believes the media has, in some ways, fallen for these firms’ own hype.

“It’s [presented as] this almost magical technology which promises so much and which is heavily pushed by the industry in this, which is all these digital campaigning companies and the political data analytics industry. And they make all these big promises,” he said.

Dr. Tom Dobber, an expert in political communication at the University of Amsterdam, agreed that the Channel 4 report did not prove the Trump campaign’s attempt to influence Black voters had been a success. More generally he does believe microtargeting can be effective — but its efficacy can be blown out of proportion.

“While the effects are reasonably strong, they should not be overestimated. It’s not like you can get a staunch Conservative to vote for Labour if you microtarget him long enough. Rather, citizens who are not already set on a party are susceptible and it seems that microtargeting ads is more effective than using untargeted ads,” he told Business Insider.

Microtargeting might not be inherently bad

Dobber said the granular nature of microtargeting has the potential to be both advantageous and dangerous for democracy.

“There are clear downsides as well as upsides, e.g. sending relevant information to inactive citizens might activate them,” he said. “Microtargeting can increase turnout. These are generally good things. But there is clear potential for manipulation and also potential for the amplification of disinformation.”

He added: “I suppose it can be used for good when actors operate in good faith, but microtargeting can just as easily be detrimental for society when used in bad faith.”

Jamal Watkins, a vice president at the NAACP, told Channel 4 that the use of microtargeting to systematically disenfranchise voters was a disturbing part of the documentary’s findings.

“We use similar voter file data, but it’s to motivate, persuade, encourage folks to participate. We don’t actually use the data to say ‘who can we deter and keep at home.’ That seems fundamentally it’s a shift from the notion of democracy,” said Watkins. 

Part of the problem here is that microtargeting is a new and largely unregulated market, so platforms like Facebook are not beholden to an industry standard for deciding which ads are allowed to appear on their platform. Facebook has broadly said it will not fact-check or constrain political speech in ads even if it is demonstrably untrue, although there have been some exceptions.

Tech platforms have also provided only limited amounts of data to researchers like Simon and Dobber.

“It would be helpful if the large social platforms would give more information about which article is targeted to which groups, on the basis of what data, tailored to which characteristics. Now, social platforms only provide rough estimations on only a few large categories,” Dobber said.

Even if it’s useless, it’s dangerous

To Felix Simon, even if microtargeting is ineffective, it poses a potential threat to individuals’ privacy: for it to be an economically viable product, it requires massive amounts of data, which can in turn be sold for other purposes.

“I think the problem is it’s not just used for that [microtargeting],” he said. “Think of a scenario where the personal information they have about you, which could be everything in the US down from the way you vote to where you live, how much you earn, how much you spend on what things, and then that is being used in a context you definitely don’t want it to be used, for instance to set the rate of your health insurance,” he said.

“That for me is more important, because from what we know there is a lot of shady stuff going on with data being sold without [people’s] knowledge, potentially to people we don’t want to have our personal data,” added Simon.

Read the original article on Business Insider


Silhouettes of mobile device users are seen next to a screen projection of the YouTube logo in this illustration.

A former YouTube content moderator is suing the company over what she says was its “failure to provide a safe workplace for the thousands of contractors that scrub YouTube’s platform of disturbing content.”
In the lawsuit, which was filed Monday, the ex-moderator claimed YouTube’s negligence played a role in her developing PTSD symptoms and depression while on the job.
She also claimed YouTube ignored its own workplace safety best practices and that “underqualified and undertrained” wellness coaches told moderators to “trust in God” and “take illegal drugs.”
The lawsuit places YouTube back under the spotlight, following a report last year in which The Verge detailed moderators’ oppressive working conditions and resulting mental health problems, and Facebook’s $52 million settlement over a similar issue in May.
Visit Business Insider’s homepage for more stories.

A former YouTube moderator is suing the company over allegations that it violated California law by failing to provide a safe workplace and protect moderators’ mental health, which she said caused her to develop “severe psychological trauma including depression and symptoms associated with anxiety and PTSD.”

In a proposed class-action lawsuit filed Monday, the ex-moderator claimed that YouTube, which is owned by Google, “failed to implement the workplace safety standards it helped create” and required moderators “to work under conditions it knows cause and exacerbate psychological trauma.”

The ex-moderator, who is not named in the suit, worked as a YouTube contractor via a staffing agency called Collabera from January 2018 to August 2019. She’s seeking to force YouTube to implement stronger safety guidelines as well as create and pay into a fund to cover the medical costs required to diagnose and treat her and other moderators who may have developed mental health conditions.

YouTube and Collabera did not respond to requests for comment.

Thousands of YouTube moderators spend hours each day reviewing videos containing disturbing content such as rape, torture, murder, suicide, and bestiality, working through anywhere between 100 and 300 videos per day with a required error rate of 5% or less, according to the lawsuit.

YouTube has long acknowledged the mental health risks to which it exposes moderators — and even helped develop best practices for reducing them.

Despite that, the ex-moderator said YouTube: downplayed those risks during training and on the job; required moderators to work longer hours because of demanding quotas, high turnover, and the company being “chronically understaffed”; and tried to silence moderators who raised concerns through non-disclosure agreements.

She said in the suit that prospective hires are told they “might be required to review graphic content,” but aren’t given examples, aren’t told they’ll have to review such content daily, and aren’t warned that doing so “can have negative mental health impacts.”

She alleged that YouTube repeatedly refused to implement product features requested by moderators that could have made reviewing content less traumatic. In one case, she said, YouTube rejected a proposed change that would have taken just a few hours to create — and even reprimanded a moderator for raising the issue again following widespread ethnic violence in Myanmar.

Her complaint also raised issues with the “wellness coaches” YouTube provided for psychological support, who allegedly weren’t available at all to moderators working the evening shift. Even those who did have access “did not receive any on-site medical care because Wellness Coaches are not medical doctors and cannot diagnose or treat mental health disorders,” she said, calling them “underqualified and undertrained.”

When she met with one coach in 2018, the coach allegedly recommended that she “take illegal drugs” to cope. In another instance, her coworker said a different coach had advised them to “‘trust in God,’ advice that was unhelpful.”

At the same time that moderators feared their conversations with wellness coaches were being reported back to management, they couldn’t voice concerns externally due to YouTube’s “sweeping” NDAs and requirements that contract agencies like Collabera instruct moderators “not to speak about the content or workplace conditions to anyone outside of their review team, including therapists, psychiatrists, or psychologists,” her complaint alleged.

She said moderators, who would lose healthcare coverage if they quit, were faced with a dilemma: “quit and lose access to an income and medical insurance or continue to suffer in silence to keep their job.”

The lawsuit brings fresh scrutiny to YouTube’s treatment of moderators, which received major attention last December when The Verge reported extensive details about the grueling conditions endured by moderators in Texas.

A former Facebook moderator brought a similar lawsuit in 2018, which the company settled in May by agreeing to pay a total of $52 million to current and former moderators who developed mental health conditions on the job. Third-party staffing agencies are also increasingly being swept up in the spotlight. Cognizant, a major firm used by Facebook and other tech platforms, ended its contract with the social media giant last year following reporting on working conditions from The Verge and Tampa Bay Times.

Read the original article on Business Insider


ByteDance CEO Zhang Yiming.

TikTok’s parent company ByteDance won’t be selling off TikTok’s US operations after all, Reuters reported Monday.
Instead, ByteDance is trying to set up a partnership with Oracle, allowing the US tech giant to manage TikTok’s US user data.
Microsoft announced on Sunday that its bid to acquire TikTok US had been rejected.
The Trump administration tried to force a sale of TikTok’s US operations using two executive orders in August, setting a deadline of September 15.
It’s not clear whether the US will allow TikTok to pursue a partnership with Oracle, rather than an outright sale.
Visit Business Insider’s homepage for more stories.

TikTok won’t sell its US business after all, a report suggests — meaning it will have to convince President Donald Trump to walk back an executive order demanding its sale to a US company by November 12.

Reuters reported Monday that TikTok was abandoning plans to sell off its US operations, and was instead striking a deal to make Oracle, the US tech giant, its “technology partner.” This chimes with a Wall Street Journal report from Sunday that said Oracle would be announced as TikTok’s “trusted tech partner.”

Microsoft announced on Sunday that TikTok’s parent company ByteDance had rejected its bid to buy TikTok US. Oracle was previously reported to be a potential buyer.

Sources told Reuters that as a technology partner, Oracle would manage TikTok’s US user data. It is not clear how this would work: TikTok says it keeps all its worldwide user data on US servers, with backups in Singapore. The same sources said Oracle is also trying to negotiate a stake in TikTok US.

Chinese state media are also reporting that TikTok is no longer for sale. Per Reuters, state-run TV station CGTN reported Monday that ByteDance won’t be selling TikTok’s US operations to Microsoft or Oracle, and that it won’t be handing over any of its source code.

The South China Morning Post reported Sunday that TikTok wouldn’t sell its all-important algorithm, citing a source familiar with the talks. In late August, Beijing passed new technology export laws that would force ByteDance to seek government permission before selling its algorithm to a foreign company, effectively kneecapping sale talks. TikTok’s “For You” page is a key part of how the app drives user engagement.

A TikTok spokesman declined to comment on the various reports when contacted by Business Insider.

Not clear how Trump will react

If TikTok is going to partner with Oracle, the White House will need convincing.

President Trump last month signed two executive orders targeting ByteDance. The first ordered all US citizens and companies to cease any “transactions” with the firm by September 20. The second ordered ByteDance to sell TikTok’s US operations by November 12. If TikTok is going to propose a new structure, whereby it partners with Oracle instead of selling, Trump will have to walk back the second executive order.

The executive orders claim that because TikTok’s parent company is Chinese, the app’s user data could be passed to the Chinese government. TikTok maintains that it doesn’t hand over user data to Beijing, and is suing the Trump administration, claiming it was denied due process. 

Previously, Trump gave TikTok a deadline of September 15 to sell off its operations to a US company, and insisted last week that the deadline would not be extended. It is not clear how he will react to the news that it’s trying to avoid a sale.

ByteDance will also have to convince the Committee on Foreign Investment in the United States (CFIUS) to wave through its partnership with Oracle. Sources told Reuters it plans to use a case from two years ago as precedent, in which a Chinese company called China Oceanwide Holdings Group Co Ltd successfully bought US insurance company Genworth Financial Inc. In that deal, China Oceanwide Holdings agreed to engage a US-based third-party company to manage its customer data.

Read the original article on Business Insider

