Zuckerberg wants Facebook to become online ‘metaverse’

Mark Zuckerberg has laid out his vision to transform Facebook from a social media network into a “metaverse company” in the next five years.

A metaverse is an online world where people can game, work and communicate in a virtual environment, often using VR headsets.

The Facebook CEO described it as “an embodied internet where instead of just viewing content – you are in it”.

He told The Verge people shouldn’t live through “small, glowing rectangles”.

“That’s not really how people are made to interact,” he said, speaking of reliance on mobile phones.

“A lot of the meetings that we have today, you’re looking at a grid of faces on a screen. That’s not how we process things either.”

‘Infinite office’
One application of the metaverse he gave was the ability to jump virtually into a 3D concert after initially watching it on a mobile phone screen.

“You feel present with other people as if you were in other places, having different experiences that you couldn’t necessarily do on a 2D app or webpage, like dancing, for example, or different types of fitness,” he said.

Facebook is also working on an “infinite office” that lets users create their ideal workplace through VR.

“In the future, instead of just doing this over a phone call, you’ll be able to sit as a hologram on my couch, or I’ll be able to sit as a hologram on your couch, and it’ll actually feel like we’re in the same place, even if we’re in different states or hundreds of miles apart,” he said. “I think that is really powerful.”

Facebook has invested heavily in virtual reality, spending $2bn (£1.46bn) on acquiring Oculus, which develops its VR products.

In 2019, it launched Facebook Horizon – an invitation-only immersive environment where users can mingle and chat in a virtual space with a cartoon avatar through Oculus headsets.

Zuckerberg admitted current VR headsets were “a bit clunky” and needed improving for people to work in them all day.

But he argued that Facebook’s metaverse would be “accessible across… different computing platforms” including VR, AR (augmented reality), PC, mobile devices and games consoles.

Metaverse origins
The concept of a metaverse is popular with tech companies who believe it could be a new 3D internet, connecting digital worlds where people hang out in virtual reality.

Its origins come from Neal Stephenson’s 1992 science fiction novel Snow Crash, where it served as a virtual-reality-based successor to the internet.

Tech firms have tried to implement metaverse elements in popular games including Animal Crossing, Fortnite and Roblox.

This includes planning live events such as concerts and tournaments where millions of players can interact from around the globe.

Behavioural data
“Part of the reason Facebook is so heavily invested in VR/AR is that the granularity of data available when users interact on these platforms is an order of magnitude higher than on screen-based media,” Verity McIntosh, a VR expert at the University of the West of England, told the BBC.

“Now it’s not just about where I click and what I choose to share, it’s about where I choose to go, how I stand, what I look at for longest, the subtle ways that I physically move my body and react to certain stimuli. It’s a direct route to my subconscious and that is gold to a data capitalist.

“It seems unlikely that Facebook will have an interest in changing a business model that has served them so well to prioritise user privacy or to give users any meaningful say in how their behavioural data in the ‘metaverse’ will be used.”

She added that further issues could arise if tech giants like Facebook define and colonise the space while traditional governance structures struggle to keep up with the pace of technological change.

Didi shares fall on reports China is planning penalties

Shares in Chinese ride-hailing giant Didi slumped by more than 11% in New York on Thursday.

It comes after a report that regulators in Beijing are considering serious penalties for the company.

Didi made its US stock market debut at the end of last month, raising $4.4bn (£3.2bn).

Just two days later, China’s internet regulator launched an investigation into the company over how it collects user data.

The penalties could include fines, suspending some operations or government investment in the company, according to Bloomberg News.

Citing people familiar with the matter, the report said that the company could even be forced to remove its shares from the US stock market.

It added that the punishment is likely to be more serious than a fine imposed on Chinese e-commerce giant Alibaba earlier this year.

Alibaba accepted a record $2.8bn fine after an official investigation found that it had abused its market position for years.

Didi says China app removal will affect business
In early July, the Cyberspace Administration of China (CAC) ordered online stores not to offer Didi’s app, saying it illegally collected users’ personal data.

That sent Didi’s share price sharply lower and it has now fallen by more than 27% since making its New York Stock Exchange debut on 30 June.

Didi did not immediately respond to a request for comment from the BBC.

China’s major internet firms have come under increasing scrutiny from Beijing this year.

China’s internet watchdog this week ordered some of the country’s biggest online platforms to remove inappropriate child-related content.

The CAC said Kuaishou, Tencent’s messaging tool QQ, Alibaba’s Taobao and Weibo were fined and told to “rectify” and “clean up” all illegal content.

China’s tech giants fall under regulator’s pressure
Another government agency fined 12 companies over deals that violated anti-monopoly rules.

The companies included Tencent, Baidu, Didi, SoftBank and a ByteDance-backed firm, the State Administration for Market Regulation (SAMR) said in a statement.

According to state broadcaster CCTV, President Xi Jinping has ordered regulators to step up their oversight of internet companies, crack down on monopolies and promote fair competition.

British man arrested in Spain over Twitter hack

Spanish police have arrested a 22-year-old UK citizen in connection with the hacking of 130 high-profile Twitter accounts, including those of Elon Musk, Barack Obama and Kanye West.

The hacked accounts tweeted followers, encouraging them to join a Bitcoin scam, in July last year.

Joseph O’Connor, who is also accused of hacking TikTok and Snapchat accounts, faces charges including three counts of conspiracy to intentionally access a computer without authorisation and obtaining information from a protected computer.

The San Francisco Division of the Federal Bureau of Investigation (FBI) has been investigating the case, helped by the Internal Revenue Service Criminal Investigation Cyber Crimes Unit and the United States Secret Service.

Previously, authorities had charged three men in relation to the breach: a 19-year-old from Bognor Regis, another teenager and a 22-year-old from Florida.

Hackers took control of public figures’ accounts and sent a series of tweets asking followers to transfer cryptocurrency to a specific Bitcoin wallet to receive double the money in return.

As a result, Twitter had to stop all verified accounts from tweeting.

The social-media company later said the hackers had targeted Twitter employees to steal credentials to access the systems.

Biden rows back on Facebook ‘killing people’ comment

US President Joe Biden has issued a statement clarifying that “Facebook isn’t killing people”, following his earlier criticism.

The president had said “they’re killing people” when asked about Facebook’s role in the Covid pandemic, a comment which made headlines around the world.

But he now says he was referring to leading misinformation spreaders on the platform.

Facebook had fiercely denied any responsibility.

President Biden’s initial remarks were off-the-cuff comments to a reporter, who’d asked about his message to “platforms like Facebook”.

“They’re killing people,” he said. “The only pandemic we have is among the un-vaccinated. And they’re killing people.”

It came on the day the White House press secretary had accused Facebook of not doing enough to combat the spread of misinformation by users.

Mr Biden now says this is what he was referring to – specifically to a recent report about 12 people credited with spreading a vast amount of misinformation.

“Facebook isn’t killing people, these 12 people are out there giving misinformation,” he said. “Anyone listening to it is getting hurt by it. It’s killing people.

“My hope is that Facebook, instead of taking it personally, that somehow I’m saying ‘Facebook is killing people’, that they would do something about the misinformation, the outrageous misinformation about the vaccine,” he said.

“That’s what I meant.”

Mr Biden’s initial quote last week was picked up by news outlets around the world, leading to an unusually strong rebuttal from Facebook.

“We will not be distracted by accusations which aren’t supported by facts,” it said.

“The facts show that Facebook is helping save lives. Period.”

The company went as far as releasing a blog post, called Moving Past the Finger Pointing, claiming that Facebook users are more likely to be vaccinated.

“The data shows that 85% of Facebook users in the US have been or want to be vaccinated against Covid-19. President Biden’s goal was for 70% of Americans to be vaccinated by July 4,” it wrote.

“Facebook is not the reason this goal was missed.”

Facebook continues to face accusations of not doing enough to tackle misinformation.

The report cited by President Biden, originally released in March, suggested that 65% of anti-vaccine posts came from the so-called “disinformation dozen” – 12 people who spread misinformation to millions of other users.

The Center for Countering Digital Hate (CCDH), which was behind the report, continues to campaign for their removal from both Facebook and Twitter.

AI narration of chef Anthony Bourdain’s voice sparks row

A new documentary about Anthony Bourdain has ignited a debate, after film-makers revealed they had used an AI simulation of the late chef’s voice.

Roadrunner: A Film About Anthony Bourdain was narrated using archive material supplemented by a synthetic voice reading short extracts of writing by Mr Bourdain, who died in 2018.

Director Morgan Neville called it a modern storytelling technique.

But some critics questioned whether it was ethical.

Mr Neville said that the synthetic voice was created by feeding more than 10 hours of Mr Bourdain’s voice into a machine-learning system.

“There were a few sentences that [Bourdain] wrote that he never spoke aloud,” he told Variety.

So the computerised voice was used to bring his writing to life.

He said the technique was used in the film with the support of Mr Bourdain’s estate and literary agent.

Mr Bourdain, who took his own life in 2018, was one of America’s best-known celebrity chefs, presenting food and travel programmes and writing a number of best-selling books.

In 2016, he shared a $6 (£4.30) meal with Barack Obama in a small Hanoi restaurant when the then-US president visited Vietnam.

Writing about the artificial voice in the film, the New Yorker’s Helen Rosner noted that the “seamlessness of the effect is eerie”.

Reviewer Sean Burns criticised the unannounced use of what he called a “deepfake” voice.

David Leslie, ethics lead at the Alan Turing Institute, said the issue showed the importance of informing audiences that AI was being used, to prevent people possibly feeling deceived or manipulated.

But he said that although it was a complex issue, the use of AI technology in documentaries should not be ruled out.

“In a world where the living could consent to using AI to reproduce their voices posthumously, and where people were made aware that such a technology was being used, up front and in advance, one could envision that this kind of application might serve useful documentary purposes,” he said.

The BBC has approached the film-makers for comment.

Cambridge data shows Bitcoin mining on the move

New data shows Bitcoin mining in China was already in sharp decline before the latest crackdown by the government.

The research by the Cambridge Centre for Alternative Finance (CCAF) found China’s share of mining fell from 75.5% in September 2019 to 46% in April 2021.

It also revealed Kazakhstan was now the third most significant Bitcoin mining nation.

Miners earn money by creating new Bitcoins, but the computing used consumes large amounts of energy.

They audit Bitcoin transactions in exchange for an opportunity to acquire the digital currency.

Global mining requires enormous computing power, which in turn uses huge amounts of electricity, and consequently contributes significantly to global emissions.
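In simplified terms, that work amounts to repeatedly hashing candidate block data until the result meets a difficulty target; the wasted attempts are where the electricity goes. A toy sketch of the idea (Bitcoin’s real difficulty is astronomically higher, and miners use specialised hardware rather than code like this):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Toy proof-of-work: find a nonce such that the SHA-256 hash of
    the block data plus the nonce starts with `difficulty` hex zeros.
    Each failed attempt is discarded work, which is why real mining
    at far higher difficulty consumes so much energy."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1
```

Raising `difficulty` by one multiplies the expected number of attempts by 16, which is roughly how the network keeps block times steady as more computing power joins.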

The CCAF’s Cambridge Bitcoin Electricity Consumption Index shows that, at the time of writing, Bitcoin consumed almost as much electricity annually as Colombia.

China moves
In June the Chinese authorities took strong action against Bitcoin.

The authorities told banks and payments platforms to stop supporting digital currency transactions, causing prices to tumble.

The data from the CCAF covers a period before the crackdown, but it shows China’s share of global mining power was already in significant decline prior to the action by the Chinese authorities.

The Cambridge researchers observed that the crackdown, once enacted, effectively led to all of China’s mining power “disappearing overnight, suggesting that miners and their equipment are on the move”.

Experts say the miners are highly mobile.

“Miners pack shipping containers with mining rigs”, said David Gerard, author of Attack Of The 50 Foot Blockchain, “so that in effect they are mobile computer data centres, and they are now trying to ship those out of China”.

It’s not clear where they will go, but even before the crackdown the geography of mining was shifting.

Kazakhstan, a country rich in fossil fuels, saw an almost six-fold increase in mining – increasing its share from 1.4% in September 2019 to 8.2% in April 2021.

According to the US Department of Commerce, 87% of Kazakhstan’s electricity “is generated from fossil fuels” with coal accounting for more than 70% of generation.

The country is now the third largest miner of Bitcoins, behind the US, which saw its share of global mining power also rise significantly – to 16.8%.

Raining money
The data also revealed the close ties between sources of cheap electricity and Bitcoin mining.

Researchers found a seasonal movement of mining between Chinese provinces in response, it was suggested, to the availability of hydro-electric power.

Mining moved from the coal-burning northern province of Xinjiang in the dry season, to the hydro-abundant southern province of Sichuan in the rainy season.

The researchers noted that “this seasonal migration has materially affected the energy profile of Bitcoin mining in China”, adding that it illustrated “the complexity of assessing the environmental effects of mining”.

Sichuan banned Bitcoin mining in June.

NHS Covid app: Should it stay or should it go?

There is a battle over the future of the NHS Test and Trace app. Over the next few days it could either be effectively ditched as a key weapon in the battle against Covid-19, or become even more important as cases of the virus multiply.

A new poll suggests one in five people have deleted the app already. Ministers are suggesting it is sending out too many “pings” and its sensitivity may need to be adjusted.

On the other hand, the scientists and software developers working on the app are adamant that it is performing well, doing just what it was designed for.

Switching off
The poll – by Savanta ComRes – for the Guardian, suggests young people in particular are giving up on the app, with a third of 18-34-year-old users saying they’ve already deleted it – and a third of the rest planning to do so over the next six days.

Stories such as the chaos at Heathrow when over 100 security staff were “pinged” into isolation can’t have helped, and with restrictions in England being lifted on Monday, it is understandable that many will want to avert any threat to their freedom.

A new update to the app on Monday will feature language stressing its advisory, rather than compulsory, nature.

Since nobody knows who has been pinged due to the privacy built into the system, that has always been the case. But making it clearer could send the message to many that they can simply ignore any alerts.

There have been discussions also about the sensitivity of the app and whether it should be tweaked so that it sends out fewer alerts. That will be a decision for the new Health Secretary Sajid Javid.

But the epidemiologists who advise him appear confident that the app is working – and perhaps better than ever before.

Better than ever?
They point to a peer-reviewed paper in the journal Nature which showed that last autumn the app averted as many as 600,000 cases and could have saved up to 8,000 lives.

The scientists now believe that the app is only sending out so many pings because there has been a surge in cases – and so many people are coming into close contact as restrictions are relaxed.

In other words – it is working just as intended.

At the heart of this battle, there are plenty of misunderstandings about how the app and the wider manual contact tracing effort work.

The aim of contact tracing is to take people who might be infected with the virus after a close contact “out of circulation” before they can pass it on to others. Note the word “might” – the majority of those sent into isolation won’t turn out to be infected.

But if even a few are, then their isolation can help in the battle to stop the spread of the virus.

In that Nature paper, a key number leaps out. The SAR – the secondary attack rate – is the percentage of those sent into isolation after a close contact with someone infected, who then go on to test positive for the virus during their isolation period. Last autumn it was 6% for the app – which sounds low, but is roughly comparable to the rate for those phoned up and told to isolate by the manual, human-operated Test and Trace operation.
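The calculation behind that figure is straightforward; a minimal sketch, using illustrative numbers rather than the study’s actual counts:

```python
def secondary_attack_rate(isolated_contacts: int, tested_positive: int) -> float:
    """Secondary attack rate (SAR): the percentage of people sent into
    isolation after a close contact who go on to test positive during
    their isolation period."""
    return 100 * tested_positive / isolated_contacts

# Illustrative figures only: 6 positives among 100 app-notified
# contacts corresponds to the roughly 6% SAR reported for the app.
rate = secondary_attack_rate(isolated_contacts=100, tested_positive=6)
```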

We know that the technology behind the app is far from perfect, but it appears to be about as reliable as asking people to remember their close contacts from the previous week.

A grand test
The other misunderstanding is a tendency to blame the app for issues that are about wider health policy.

“Why is it sending me into isolation when I’ve had a negative test and have been double jabbed?” Well, that’s the policy set for the manual tracing operation too, and this will change in mid-August when the app will ask you whether you have been fully vaccinated, and then just advise a test rather than isolation.

In the end, the app has been a giant experiment – not just in technology, but in human behaviour.

How will people react when an app notification rather than a phone call tells them to isolate? Does the theatre of scanning a QR code to register at a cafe encourage use of the app – and will people switch it off when that is no longer required?

The problem is that how the app works is pretty complicated – and the government has not always done a great job of explaining it.

If the war over its future ends in defeat for its supporters, a poor communications strategy will be partly to blame.

Google fined €500m by French competition authority

Google has been hit with a €500m (£427m) fine by France’s competition authority for failing to negotiate “in good faith” with news organisations over the use of their content.

The authority accused Google of not taking an order to do so seriously.

Google told the BBC the decision “ignores our efforts to reach an agreement”.

The fine is the latest skirmish in a global copyright battle between tech firms and news organisations.

Last year, the French competition authority ordered that Google must negotiate deals with news organisations to show extracts of articles in search results, news and other services.

Google was fined because, in the authority’s view, it failed to do this.

In 2019, France became the first EU country to transpose a new Digital Copyright Directive into law.

The law governed so-called “neighbouring rights” which are designed to compensate publishers and news agencies for the use of their material.

As a result, Google decided it would not show content from EU publishers in France, on services like search and news, unless publishers agreed to let it do so free of charge.

News organisations felt this was an abuse of Google’s market power, and two organisations representing press publishers and Agence France-Presse (AFP) complained to the competition authority.

Google told the BBC: “We are very disappointed with this decision – we have acted in good faith throughout the entire process.”

It said that it is, to date, the only company to have announced agreements on so-called neighbouring rights.

It added it was about to finalise an agreement with AFP that includes a global licensing agreement and payments for press publications.

The new ruling means that within the next two months Google must come up with proposals explaining how it will recompense companies for the use of their news.

Should this fail to happen the company could face additional fines of €900,000 per day.

“When the authority decrees an obligation for a company, it must comply scrupulously,” said the competition authority’s Isabelle de Silva in a statement.

“Here, this was unfortunately not the case,” she wrote.

Euro 2020: Why abuse remains rife on social media

Once again, social-media companies are facing criticism as their platforms are used to racially abuse football players, following the dramatic conclusion of the Euro 2020 men’s tournament on Sunday night.

And make no mistake, it is becoming increasingly difficult for the technology giants to defend these incidents. How can they not be held more responsible for the content they share to millions?

In a nutshell, social-media platforms have not been regulated in the same way as traditional media such as the BBC, because they are not categorised as publishers or broadcasters.

If racist comments appeared below this article, written not by me but by someone who had read it, the BBC would be immediately accountable, as part of an arrangement with the regulator Ofcom.

But Ofcom does not yet have powers over the likes of Facebook, TikTok, YouTube and Twitter, which have until now been largely self-regulating – although, that is coming, as part of the long-anticipated Online Safety Bill.

Whether the threat of large fines is enough to focus the minds of these multi-million-dollar businesses remains to be seen. And it is not just in the UK that regulation is planned.

In fairness, while the BBC does have a large global presence, it does not have to deal with anything like the volume of content and video, written and uploaded in real time by anybody and everybody, that a platform such as Facebook, with its two billion users, does.

This sheer volume swamps the armies of human moderators employed by those platforms.

Some describe nightmare shifts sifting through the worst and most graphic content imaginable and then making decisions about what should be done with it.

So, the solution these companies are all pouring countless time and money into is automation.

Algorithms trained to seek out offensive material before it is published, the blanket banning of incendiary (and illegal) hashtags, and the use of techniques such as “hashing”, which creates a kind of digital fingerprint of a video and then blocks any content bearing the same marker, are already in regular use.
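The hash-matching idea can be sketched in a few lines. This is a simplified illustration: real moderation systems use perceptual hashes that survive re-encoding, cropping and compression, whereas the cryptographic hash below only catches byte-identical copies.

```python
import hashlib

# Hypothetical blocklist of fingerprints of known banned videos.
BLOCKED_HASHES: set[str] = set()

def fingerprint(video_bytes: bytes) -> str:
    # A cryptographic hash as a stand-in for a perceptual hash:
    # identical bytes always produce the identical fingerprint.
    return hashlib.sha256(video_bytes).hexdigest()

def register_blocked(video_bytes: bytes) -> None:
    """Add a known abusive video's fingerprint to the blocklist."""
    BLOCKED_HASHES.add(fingerprint(video_bytes))

def is_blocked(video_bytes: bytes) -> bool:
    """Check an upload against the blocklist before publishing."""
    return fingerprint(video_bytes) in BLOCKED_HASHES
```

The weakness the article goes on to describe follows directly from this design: anything that changes the fingerprint, or was never fingerprinted in the first place, passes straight through.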

But so far, automation remains a bit of a blunt instrument.

It is not yet sophisticated enough to understand nuance, context, different cultural norms or crafty edits – and there would be very little support for an anonymous algorithm, built by an unelected US technology giant, censoring Western people’s speech without accounting for those factors (China, of course, has its own state censorship, and US social-media platforms are banned there).

Here is an example – a friend last night reported an anonymous account that had posted an orangutan emoji beneath an Instagram post belonging to England’s Bukayo Saka.

Now, nobody is going to blanket-ban that emoji.

But in this context, and given Saka is a young black player whose penalty kick was one of those that failed in the deciding moments of the England v Italy Euro 2020 final, the intention is pretty clear.

The response my friend received to her complaint was: “Our technology has found that this probably doesn’t go against our community guidelines”, although it went on to add that the automation “isn’t perfect”.

She has appealed against the decision but who knows when it will get a pair of human eyes on it?

‘Quickly removed’
On Twitter, meanwhile, a user was apparently able to use an extremely offensive racial slur in a tweet about a footballer, before deleting the message.

In a statement, Facebook, which owns Instagram, said it had “quickly removed comments” directed at players.

“No-one should have to experience racist abuse anywhere – and we don’t want it on Instagram,” it said.

Twitter’s response was similar – in 24 hours, it had removed 1,000 posts and blocked accounts sharing hateful content, using “a combination of machine learning based automation and human review”, it said.

Human nature
Both companies also said they had tools and filters that could be activated to stop account holders from seeing abusive content – but this does not solve the problem of the content being shared in the first place.

“No one thing will fix this challenge overnight,” Facebook added.

And perhaps that is what really lies at the heart of all this – it is not necessarily the limitations of technology that are the issue but old-fashioned human nature.

Prime Minister Boris Johnson said, on Twitter ironically, those responsible for posting racist abuse should “be ashamed of themselves”.

But for some people, there is something seemingly irresistible about hiding behind a keyboard and saying things they would in all likelihood never say out loud, in person.

Never before have people had access not only to those they want to berate but also a potentially enormous audience of those who will listen – and join in.

For some, it has proved a heady and intoxicating mix – and one that has quickly established itself as the norm.

I was talking to someone recently about something controversial that happened 20 years ago.

Her first question was about how social media had responded.

I hesitated for a moment, wondering why I could not remember that detail – and then realised it was simply because it did not exist.