British man arrested in Spain over Twitter hack

Spanish police have arrested a 22-year-old UK citizen in connection with the hacking of 130 high-profile Twitter accounts, including those of Elon Musk, Barack Obama and Kanye West.

In July last year, the hacked accounts tweeted followers, encouraging them to join a Bitcoin scam.

Joseph O’Connor, who is also accused of hacking TikTok and Snapchat accounts, faces charges including three counts of conspiracy to intentionally access a computer without authorisation and obtaining information from a protected computer.

The San Francisco Division of the Federal Bureau of Investigation (FBI) has been investigating the case, helped by the Internal Revenue Service Criminal Investigation Cyber Crimes Unit and the United States Secret Service.

Previously, authorities had charged three men in relation to the breach: a 19-year-old from Bognor Regis, another teenager and a 22-year-old from Florida.

Hackers took control of public figures’ accounts and sent a series of tweets asking followers to transfer cryptocurrency to a specific Bitcoin wallet to receive double the money in return.

As a result, Twitter had to stop all verified accounts from tweeting.

The social-media company later said the hackers had targeted Twitter employees to steal credentials to access the systems.

Biden rows back on Facebook ‘killing people’ comment

US President Joe Biden has issued a statement clarifying that “Facebook isn’t killing people”, following his earlier criticism.

The president had said “they’re killing people” when asked about Facebook’s role in the Covid pandemic, a comment which made headlines around the world.

But he now says he was referring to leading misinformation spreaders on the platform.

Facebook had fiercely denied any responsibility.

President Biden’s initial remarks were off-the-cuff comments to a reporter, who’d asked about his message to “platforms like Facebook”.

“They’re killing people,” he said. “The only pandemic we have is among the unvaccinated. And they’re killing people.”

It came on the day the White House press secretary had accused Facebook of not doing enough to combat the spread of misinformation by users.

Mr Biden now says this is what he was referring to – specifically to a recent report about 12 people blamed for spreading a vast amount of misinformation.

“Facebook isn’t killing people, these 12 people are out there giving misinformation,” he said. “Anyone listening to it is getting hurt by it. It’s killing people.

“My hope is that Facebook, instead of taking it personally, that somehow I’m saying ‘Facebook is killing people’, that they would do something about the misinformation, the outrageous misinformation about the vaccine,” he said.

“That’s what I meant.”

Mr Biden’s initial quote last week was picked up by news outlets around the world, leading to an unusually strong rebuttal from Facebook.

“We will not be distracted by accusations which aren’t supported by facts,” it said.

“The facts show that Facebook is helping save lives. Period.”

The company went as far as releasing a blog post, called “Moving Past the Finger Pointing”, claiming that Facebook users are more likely to be vaccinated.

“The data shows that 85% of Facebook users in the US have been or want to be vaccinated against Covid-19. President Biden’s goal was for 70% of Americans to be vaccinated by July 4,” it wrote.

“Facebook is not the reason this goal was missed.”

Facebook continues to face accusations of not doing enough to tackle misinformation.

The report cited by President Biden, originally released in March, suggested that 65% of anti-vaccine posts came from the so-called “disinformation dozen” – 12 people who spread misinformation to millions of other users.

The Center for Countering Digital Hate (CCDH), which was behind the report, continues to campaign for their removal from both Facebook and Twitter.

AI narration of chef Anthony Bourdain’s voice sparks row

A new documentary about Anthony Bourdain has ignited a debate, after film-makers revealed they had used an AI simulation of the late chef’s voice.

Roadrunner: A Film About Anthony Bourdain was narrated using archive material supplemented by a synthetic voice reading short extracts of writing by Mr Bourdain, who died in 2018.

Director Morgan Neville called it a modern storytelling technique.

But some critics questioned whether it was ethical.

Mr Neville said that the synthetic voice was created by feeding more than 10 hours of Mr Bourdain’s voice into a machine-learning system.

“There were a few sentences that [Bourdain] wrote that he never spoke aloud,” he told Variety.

So the computerised voice was used to bring his writing to life.

He said the technique was used in the film with the support of Mr Bourdain’s estate and literary agent.

Mr Bourdain, who took his own life in 2018, was one of America’s best-known celebrity chefs, presenting food and travel programmes and writing a number of best-selling books.

In 2016, he shared a $6 (£4.30) meal with Barack Obama in a small Hanoi restaurant when the then-US president visited Vietnam.

Writing about the artificial voice in the film, the New Yorker’s Helen Rosner noted that the “seamlessness of the effect is eerie”.

Reviewer Sean Burns criticised the unannounced use of what he called a “deepfake” voice.

David Leslie, ethics lead at the Alan Turing Institute, said the issue showed the importance of informing audiences that AI was being used, to prevent people possibly feeling deceived or manipulated.

But he said that although it was a complex issue, the use of AI technology in documentaries should not be ruled out.

“In a world where the living could consent to using AI to reproduce their voices posthumously, and where people were made aware that such a technology was being used, up front and in advance, one could envision that this kind of application might serve useful documentary purposes,” he said.

The BBC has approached the film-makers for comment.

Cambridge data shows Bitcoin mining on the move

New data shows Bitcoin mining in China was already in sharp decline before the latest crackdown by the government.

The research by the Cambridge Centre for Alternative Finance (CCAF) found China’s share of mining fell from 75.5% in September 2019 to 46% in April 2021.

It also revealed Kazakhstan was now the third most significant Bitcoin mining nation.

Miners earn money by creating new Bitcoins, but the computing used consumes large amounts of energy.

They audit Bitcoin transactions in exchange for an opportunity to acquire the digital currency.

Global mining requires enormous computing power, which in turn uses huge amounts of electricity, and consequently contributes significantly to global emissions.
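
The energy cost comes from proof-of-work: miners race to find a nonce whose hash of the candidate block falls below a difficulty target, and every failed attempt is discarded computation. A minimal Python sketch of the idea (illustrative only – the real protocol uses double SHA-256 over a binary block header and a vastly harder target):

```python
import hashlib

def mine(header: str, difficulty: int) -> tuple[int, str]:
    """Search for a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1  # every failed attempt is wasted electricity - hence the energy bill

nonce, digest = mine("block-header-data", difficulty=4)
print(nonce, digest)
```

Each extra zero digit multiplies the expected number of attempts by 16, which is why mining at real-network difficulty demands warehouse-scale hardware.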

The CCAF’s Cambridge Bitcoin Electricity Consumption Index shows that, at the time of writing, Bitcoin consumed almost as much electricity annually as Colombia.

China moves
In June the Chinese authorities took strong action against Bitcoin.

The authorities told banks and payments platforms to stop supporting digital currency transactions, causing prices to tumble.

The data from the CCAF covers a period before the crackdown, but it shows China’s share of global mining power was already in significant decline prior to the action by the Chinese authorities.

The Cambridge researchers observed that the crackdown, once enacted, effectively led to all of China’s mining power “disappearing overnight, suggesting that miners and their equipment are on the move”.

Experts say the miners are highly mobile.

“Miners pack shipping containers with mining rigs”, said David Gerard, author of Attack Of The 50 Foot Blockchain, “so that in effect they are mobile computer data centres, and they are now trying to ship those out of China”.

It’s not clear where they will go, but even before the crackdown the geography of mining was shifting.

Kazakhstan, a country rich in fossil fuels, saw an almost six-fold increase in mining, its share rising from 1.4% in September 2019 to 8.2% in April 2021.

According to the US Department of Commerce, 87% of Kazakhstan’s electricity “is generated from fossil fuels” with coal accounting for more than 70% of generation.

The country is now the third largest miner of Bitcoins, behind the US, which saw its share of global mining power also rise significantly – to 16.8%.

Raining money
The data also revealed the close ties between sources of cheap electricity and Bitcoin mining.

Researchers found a seasonal movement of mining between Chinese provinces in response, it was suggested, to the availability of hydro-electric power.

Mining moved from the coal-burning northern province of Xinjiang in the dry season, to the hydro-abundant southern province of Sichuan in the rainy season.

The researchers noted that “this seasonal migration has materially affected the energy profile of Bitcoin mining in China”, adding that it illustrated “the complexity of assessing the environmental effects of mining”.

Sichuan banned Bitcoin mining in June.

NHS Covid app: Should it stay or should it go?

There is a battle over the future of the NHS Test and Trace app. Over the next few days it could either be effectively ditched as a key weapon in the fight against Covid-19, or become even more important as cases of the virus multiply.

A new poll suggests one in five people have deleted the app already. Ministers are suggesting it is sending out too many “pings” and its sensitivity may need to be adjusted.

On the other hand, the scientists and software developers working on the app are adamant that it is performing well, doing just what it was designed for.

Switching off
The poll, by Savanta ComRes for the Guardian, suggests young people in particular are giving up on the app, with a third of 18-34-year-old users saying they’ve already deleted it – and a third of the rest planning to do so over the next six days.

Stories such as the chaos at Heathrow when over 100 security staff were “pinged” into isolation can’t have helped, and with restrictions in England being lifted on Monday, it is understandable that many will want to avert any threat to their freedom.

A new update to the app on Monday will feature language stressing its advisory, rather than compulsory, nature.

Since nobody knows who has been pinged due to the privacy built into the system, that has always been the case. But making it clearer could send the message to many that they can simply ignore any alerts.

There have been discussions also about the sensitivity of the app and whether it should be tweaked so that it sends out fewer alerts. That will be a decision for the new Health Secretary Sajid Javid.

But the epidemiologists who advise him appear confident that the app is working – and perhaps better than ever before.

Better than ever?
They point to a peer-reviewed paper in the journal Nature which showed that last autumn the app averted as many as 600,000 cases and could have saved up to 8,000 lives.

The scientists now believe that the app is only sending out so many pings because there has been a surge in cases – and so many people are coming into close contact as restrictions are relaxed.

In other words – it is working just as intended.

At the heart of this battle, there are plenty of misunderstandings about how the app and the wider manual contact tracing effort work.

The aim of contact tracing is to take people who might be infected with the virus after a close contact “out of circulation” before they can pass it on to others. Note the word “might” – the majority of those sent into isolation won’t turn out to be infected.

But if even a few are, then their isolation can help in the battle to stop the spread of the virus.

In that Nature paper, a key number leaps out. The SAR – the secondary attack rate – is the percentage of those sent into isolation after a close contact with someone infected, who then go on to test positive for the virus during their isolation period. Last autumn it was 6% for the app – which sounds low, but is roughly comparable to the rate for those phoned up and told to isolate by the manual, human-operated Test and Trace operation.
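
The secondary attack rate is a simple ratio: the share of isolated contacts who go on to test positive. A worked example with hypothetical numbers:

```python
def secondary_attack_rate(tested_positive: int, told_to_isolate: int) -> float:
    """Percentage of contacts sent into isolation who later test positive."""
    return 100 * tested_positive / told_to_isolate

# Hypothetical figures: 600 positives among 10,000 contacts told to isolate
print(secondary_attack_rate(600, 10_000))  # 6.0 - the rate the app reported last autumn
```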

We know that the technology behind the app is far from perfect, but it appears to be about as reliable as asking people to remember their close contacts from the previous week.

A grand test
The other misunderstanding is a tendency to blame the app for issues that are about wider health policy.

“Why is it sending me into isolation when I’ve had a negative test and have been double jabbed?” Well, that’s the policy set for the manual tracing operation too, and this will change in mid-August, when the app will ask you whether you have been fully vaccinated, and then just advise a test rather than isolation.

In the end, the app has been a giant experiment – not just in technology, but in human behaviour.

How will people react when an app notification rather than a phone call tells them to isolate? Does the theatre of scanning a QR code to register at a cafe encourage use of the app – and will people switch it off when that is no longer required?

The problem is that how the app works is pretty complicated – and the government has not always done a great job of explaining it.

If the war over its future ends in defeat for its supporters, a poor communications strategy will be partly to blame.

Google fined €500m by French competition authority

Google has been hit with a €500m (£427m) fine by France’s competition authority for failing to negotiate “in good faith” with news organisations over the use of their content.

The authority accused Google of not taking an order to do so seriously.

Google told the BBC the decision “ignores our efforts to reach an agreement”.

The fine is the latest skirmish in a global copyright battle between tech firms and news organisations.

Last year, the French competition authority ordered that Google must negotiate deals with news organisations to show extracts of articles in search results, news and other services.

Google was fined because, in the authority’s view, it failed to do this.

In 2019, France became the first EU country to transpose a new Digital Copyright Directive into law.

The law governed so-called “neighbouring rights” which are designed to compensate publishers and news agencies for the use of their material.

As a result, Google decided it would not show content from EU publishers in France, on services like search and news, unless publishers agreed to let them do so free of charge.

News organisations felt this was an abuse of Google’s market power, and two organisations representing press publishers and Agence France-Presse (AFP) complained to the competition authority.

Google told the BBC: “We are very disappointed with this decision – we have acted in good faith throughout the entire process.”

It said that it is, to date, the only company to have announced agreements on so-called neighbouring rights.

It added it was about to finalise an agreement with AFP that includes a global licensing agreement and payments for press publications.

The new ruling means that within the next two months Google must come up with proposals explaining how it will recompense companies for the use of their news.

Should this fail to happen the company could face additional fines of €900,000 per day.

“When the authority decrees an obligation for a company, it must comply scrupulously,” said the competition authority’s Isabelle de Silva in a statement.

“Here, this was unfortunately not the case,” she wrote.

Euro 2020: Why abuse remains rife on social media

Once again, social-media companies are facing criticism as their platforms are used to racially abuse football players, following the dramatic conclusion of the Euro 2020 men’s tournament on Sunday night.

And make no mistake, it is becoming increasingly difficult for the technology giants to defend these incidents. How can they not be held more responsible for the content they share to millions?

In a nutshell, social-media platforms have not been regulated in the same way as traditional media such as the BBC, because they are not categorised as publishers or broadcasters.

If racist comments appeared below this article, written not by me but by someone who had read it, the BBC would be immediately accountable, as part of an arrangement with the regulator Ofcom.

But Ofcom does not yet have powers over the likes of Facebook, TikTok, YouTube and Twitter, which have until now been largely self-regulating – although, that is coming, as part of the long-anticipated Online Safety Bill.

Whether the threat of large fines is enough to focus the minds of these multibillion-dollar businesses remains to be seen, however. But it is not just in the UK that regulation is planned.

In fairness, while the BBC does have a large global presence, it does not have to deal with anything like the volume of content – written and uploaded in real time by anybody and everybody – that a platform such as Facebook, with its two billion users, does.

This sheer volume swamps the armies of human moderators employed by those platforms.

Some describe nightmare shifts sifting through the worst and most graphic content imaginable and then making decisions about what should be done with it.

So, the solution these companies are all pouring countless time and money into is automation.

Algorithms trained to seek out offensive material before it is published, blanket bans on incendiary (and illegal) hashtags, and techniques such as “hashing” – which creates a kind of digital fingerprint of a video and then blocks any content bearing the same marker – are already in regular use.
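
The hashing idea can be sketched simply: known abusive media is fingerprinted once, and any upload with a matching fingerprint is blocked before publication. This toy version uses a cryptographic hash, which only catches byte-exact copies; real systems use perceptual hashes (such as Microsoft’s PhotoDNA) that survive re-encoding and cropping:

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """Digital fingerprint of a piece of media (exact-match only in this sketch)."""
    return hashlib.sha256(media_bytes).hexdigest()

# Fingerprints of content already ruled against the platform's policies
blocklist = {fingerprint(b"known banned video bytes")}

def allowed(upload: bytes) -> bool:
    """Reject any upload bearing the same fingerprint as known banned media."""
    return fingerprint(upload) not in blocklist

print(allowed(b"known banned video bytes"))  # False: exact copy is blocked
print(allowed(b"a brand new clip"))          # True: nothing matches
```

The weakness the article goes on to describe follows directly: change a single byte and the cryptographic hash no longer matches, and no hash of any kind captures context or intent.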

But so far, automation remains a bit of a blunt instrument.

It is not yet sophisticated enough to understand nuance, context, different cultural norms or crafty edits. And there would be very little support for an anonymous algorithm, built by an unelected US technology giant, effectively censoring Western people’s speech without these factors. (China, of course, has its own state censorship, and US social-media platforms are banned there.)

Here is an example – a friend last night reported an anonymous account that had posted an orangutan emoji beneath an Instagram post belonging to England’s Bukayo Saka.

Now, nobody is going to blanket-ban that emoji.

But in this context, and given Saka is a young black player whose penalty kick was one of those that failed in the deciding moments of the England v Italy Euro 2020 final, the intention is pretty clear.

The response my friend received to her complaint was: “Our technology has found that this probably doesn’t go against our community guidelines” – although it went on to add the automation “isn’t perfect”.

She has appealed against the decision but who knows when it will get a pair of human eyes on it?

‘Quickly removed’
On Twitter, meanwhile, a user was apparently able to use an extremely offensive racial slur in a tweet about a footballer, before deleting the message.

In a statement, Facebook, which owns Instagram, said it had “quickly removed comments” directed at players.

“No-one should have to experience racist abuse anywhere – and we don’t want it on Instagram,” it said.

Twitter’s response was similar – in 24 hours, it had removed 1,000 posts and blocked accounts sharing hateful content, using “a combination of machine learning based automation and human review”, it said.

Human nature
Both companies also said they had tools and filters that could be activated to stop account holders from seeing abusive content – but this does not solve the problem of the content being shared in the first place.

“No one thing will fix this challenge overnight,” Facebook added.

And perhaps that is what really lies at the heart of all this – it is not necessarily the limitations of technology that are the issue but old-fashioned human nature.

Prime Minister Boris Johnson said, on Twitter ironically, those responsible for posting racist abuse should “be ashamed of themselves”.

But for some people, there is something seemingly irresistible about hiding behind a keyboard and saying things they would in all likelihood never say out loud, in person.

Never before have people had access not only to those they want to berate but also a potentially enormous audience of those who will listen – and join in.

For some, it has proved a heady and intoxicating mix – and one that has quickly established itself as the norm.

I was talking to someone recently about something controversial that happened 20 years ago.

Her first question was about how social media had responded.

I hesitated for a moment, wondering why I could not remember that detail – and then realised it was simply because it did not exist.

Google boss Sundar Pichai warns of threats to internet freedom

The free and open internet is under attack in countries around the world, Google boss Sundar Pichai has warned.

He says many countries are restricting the flow of information, and the model is often taken for granted.

In an in-depth interview with the BBC, Pichai also addresses controversies around tax, privacy and data.

And he argues artificial intelligence is more profound than fire, electricity or the internet.

Pichai is chief executive of one of the most complex, consequential and rich institutions in history.

The next revolutions
I spoke to him at Google’s HQ in Silicon Valley, for the first of a series of interviews I am doing for the BBC with global figures.

As boss of both Google and its parent company Alphabet, he is the ultimate leader of companies or products as varied as Waze, FitBit and DeepMind, the artificial intelligence pioneers. At Google alone he oversees Gmail, Google Chrome, Google Maps, Google Earth, Google Docs, Google Photos, the Android operating system and many other products.

But by far the most familiar is Google Search. It’s even become its own verb: to Google.

Over the past 23 years, Google has probably shaped the mostly free and open internet we have today more than any other company.

According to Pichai, over the next quarter of a century, two other developments will further revolutionise our world: artificial intelligence and quantum computing. Amid the rustling leaves and sunshine of the vast, empty campus that is Google’s HQ in Silicon Valley, Pichai stressed how consequential AI was going to be.

“I view it as the most profound technology that humanity will ever develop and work on,” he said. “You know, if you think about fire or electricity or the internet, it’s like that. But I think even more profound.”

Artificial intelligence is, at base, the attempt to replicate human intelligence in machines. Various AI systems are already better at solving particular kinds of problems than humans. For an eloquent exposition of the potential harms from AI, try this essay by Henry Kissinger.

Quantum computing is a totally different phenomenon. Ordinary computing is based on states of matter that are binary: 0 or 1, nothing in between. These states are called bits.

But at the quantum, or sub-atomic level, matter behaves differently: it can be 0 or 1 at the same time – or on a spectrum between the two. Quantum computers are built on qubits, which factor in the probability of matter being in one of various different states. This is mind-boggling stuff, but it could change the world. Wired has an excellent explainer.
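
The bit/qubit distinction can be made concrete. A qubit’s state is a pair of complex amplitudes (α, β) with |α|² + |β|² = 1, and measuring it yields 0 with probability |α|² or 1 with probability |β|². A toy Python simulation (illustrative only – real quantum hardware does not work by sampling a stored vector):

```python
import math
import random

# An equal superposition: amplitude 1/sqrt(2) for each of |0> and |1>
alpha, beta = 1 / math.sqrt(2), 1 / math.sqrt(2)
assert math.isclose(abs(alpha) ** 2 + abs(beta) ** 2, 1.0)  # normalisation

def measure(alpha: complex, beta: complex) -> int:
    """Collapse the qubit to 0 or 1 with the probabilities its amplitudes encode."""
    return 0 if random.random() < abs(alpha) ** 2 else 1

counts = [measure(alpha, beta) for _ in range(10_000)]
print(sum(counts) / len(counts))  # roughly 0.5: each outcome about half the time
```

A classical bit would need only a single 0 or 1; describing n qubits classically takes 2ⁿ amplitudes, which is where the promise (and the difficulty) of quantum computing comes from.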

Pichai and other leading technologists find the possibilities here exhilarating. “[Quantum] is not going to work for everything. There are things for which the way we do computing today would always be better. But there are some things for which quantum computing will open up an entire new range of solutions.”

Pichai rose through the ranks of Google by being the most effective, popular and respected product manager in the company’s history. Neither Chrome, the browser, nor Android, the mobile operating system, was his idea (Android was for a while led by Andy Rubin). But Pichai was the product manager who led them, under the watchful eyes of Google’s founders, to global domination.

In a sense, Pichai is now product managing the infinitely greater challenges of AI and quantum computing. He is doing so as Google faces a daily barrage of scrutiny and criticism on several fronts – to name but three: tax, privacy, and alleged monopoly status.

Taxing tech
Google gets defensive on matters relating to tax.

For several years, the company has paid huge sums to accountants and lawyers in order to legally reduce its tax obligations.

For instance, in 2017, Google moved more than $20bn to Bermuda through a Dutch shell company, as part of a strategy called “Double Irish, Dutch Sandwich”.

I put this to Pichai, who said that Google no longer uses this scheme, is one of the world’s biggest taxpayers, and complies with tax laws in every country in which it operates.

I responded that his answer revealed exactly the problem: this isn’t just a legal issue, it’s a moral one. Poor people generally don’t employ accountants in order to minimise their tax bills; large-scale tax avoidance is something that the richest people in the world do, and – I suggested to him – may weaken the collective sacrifice.

When I invited Pichai to commit there and then to Google pulling out of all tax havens immediately, he didn’t take up the offer.

He did, however, make clear that he is “encouraged by the conversations around a corporate global minimum tax”.

It is clear that Google is engaging with policy-makers on finding ways to make tax simpler and more effective. It is true that the company generates most of its research and revenues in the US, which is where it pays most of its tax.

Moreover, it has paid effective tax of 20% over the past decade, which is more than many companies. Nevertheless, any use of any tax haven is a reputational exposure for companies when, across the world, trillions are being borrowed, spent, and raised through taxes on ordinary folk in order to mitigate the pandemic.

The other big issues where Google is facing constant and growing scrutiny surround data, privacy, and whether or not the company has an effective monopoly in Search, where it is totally dominant.

On the last of these, Pichai makes the case that Google is a free product, and users can easily go elsewhere.

This is the same argument that Facebook has used, and Mark Zuckerberg’s company received a strong endorsement from Judge James Boasberg of Washington DC last month, when he rejected a raft of anti-trust cases against the social media giant on the grounds that it didn’t meet the current definition of a monopoly (that is, “the power to profitably raise prices or exclude competition”).

The exchanges on privacy, data, tax and dominance in Search were perhaps the most testy of the time I had with Pichai, and can be heard in the podcast version.

Industry respect
In preparation for the interview, I spoke to more than a dozen current or former Google executives, other senior executives at big tech firms, regulators and tech-sector strategists. There was reliably strong opinion and consensus within each camp.

Those who work in the tech sector said you just cannot argue with the growth in Google’s share price under Pichai. It has nearly tripled. That’s a phenomenal performance. Arguing that it is explained by favourable prevailing winds in consumer behaviour – of the kind that have helped other tech giants to grow – misses the point.

Google created that consumer behaviour with astonishing engineering and world-class products.

Mostly off the record, the regulators said new laws, and language, needed to be designed to exert better scrutiny on this new kind of corporate giant. Judge Boasberg’s verdict on Facebook rather confirmed this. Interestingly, Lina Khan, the new 32-year-old boss of the Federal Trade Commission, has previously argued that the definition of monopoly should be expanded to reflect this new world.

The senior executives at other big tech firms were struck by what an effective public performer Pichai is. His testimonies in Congress have rarely led to drops in Google’s share price. His emollient manner and grasp of detail allows him to draw poison from potentially difficult situations.

A low-profile, avuncular figure, he keeps himself largely to himself – which is partly why Google staff who watch the interview will learn a lot about him (those present said they did).

In a very revealing quick-fire round of questions, we discover he doesn’t eat meat, drives a Tesla, reveres Alan Turing, wishes he’d met Stephen Hawking, and is jealous of Jeff Bezos’s space mission.

It was fascinating to find all this out from such an influential figure, precisely because he doesn’t make too many public pronouncements. You wouldn’t, for instance, find him on Instagram riding an electric hydrofoil surfboard while holding an American flag, on US Independence Day, to the sound of John Denver’s Country Roads (the version by Toots Hibbert is, of course, infinitely better).

Chief ethics officer
It was what I heard from those who worked with or for him, however, that most informed my approach.

Pichai is universally regarded as an exceptionally kind, thoughtful, and caring leader. Considerate toward staff, he is, according to everyone I spoke to who knew him, genuinely committed to being an ethical example. He is an idealist when it comes to the impact of technology on improving living standards, something that has its roots in his upbringing, which we discussed at length.

He was born into a middle-class family in Tamil Nadu, in south India. Various technologies had a transformative impact on him, from the old rotary phone that they were on a waiting list for, to the scooter they all piled on to for a monthly dinner.

At Google, he won over the engineers and software developers. It helped that he was a metallurgical engineer himself, but it’s still not easy; the brains trust at Silicon Valley companies includes many of the biggest egos on the planet. Yet they respect him hugely.

Pichai fits the counter-cyclical approach to leadership appointments favoured by many head-hunters. After the necessarily pioneering, zealous, risk-taking leadership of founders Larry Page and Sergey Brin, it made sense to have a lower-profile, solid, more cautious leader who would soothe public anxieties and charm public officials.

Pichai has been outstanding at these latter tasks, and the company’s share price performance is remarkable. Not many people in history could say they’ve created a trillion dollars of value as CEO.

But the very qualities that made him a smart counter-cyclical appointment also point to potential pitfalls, according to ex-Googlers and many other close watchers. It’s important to say that these people are generally tech evangelists, who have very different priorities from your average punter.

The tech evangelists are united on a few points.

First, Google is now a more cautious company than it has ever been (Google would of course dispute this, and others would say it would be a good thing if true).

Second, Google has a bunch of “Me-Too” products rather than original ideas, in the sense that it sees other people make great inventions and then unleashes its engineers to improve them.

Third, a lot of Pichai’s big bets have failed: Google Glass, Google Plus, Google Wave, Project Loon. Google could reasonably retort that there is value in experimentation and failure – and that this criticism rather conflicts with the first point above.

Fourth, that Google’s ambition to solve humanity’s biggest problems is waning. With the biggest concentration of computer science PhDs in the world in one tiny strip of land south of San Francisco, goes this argument, shouldn’t Google be reversing climate change, or solving cancer? I find this criticism hard to reconcile with Pichai’s record, but it is common.

Finally, that he deserves tremendous sympathy, because managing a staff as big, recalcitrant, demanding and idealistic as Google’s in an era of culture wars is essentially impossible. These days Google is quite frequently in the news because of staff walkouts over diversity or pay; or because key people have left over controversial issues around identity.

With more than 100,000 staff, many of them hugely opinionated on internal message boards, and activist in nature, this is just impossible to control. There is a tension between Google genuinely embracing cognitive diversity by having people of all persuasions among its global staff, and at the same time really standing up for particular issues as a company.

Acceleration
All the above are concerns of people within the tech world who want Google to go faster. A lot of voters in polarised democracies would like big tech to slow down.

The most obvious lesson I draw from my time in Silicon Valley is that there is no chance of that happening. Acceleration is the norm: the speeding up of history is itself speeding up.

And, when I asked about whether the Chinese model of the internet – much more authoritarian, big on surveillance – is in the ascendant, Pichai said the free and open internet “is being attacked”. Importantly, he didn’t refer to China directly but he went on to say: “None of our major products and services are available in China.”

With legislators and regulators proving slow, ineffective, and easy to lobby – and a pandemic taking up plenty of bandwidth – right now the democratic West is largely leaving it to people like Sundar Pichai to decide where we should all be heading.

Tech Tent: Do Covid apps work?

At the beginning of the coronavirus pandemic, governments around the world decided the smartphone could be a key weapon in their battle to stop the spread of Covid-19.

On this week’s Tech Tent, we ask whether their various approaches, from Bluetooth contact-tracing apps to smartphone surveillance, have made a difference.

Listen to the latest Tech Tent podcast on BBC Sounds
Listen live every Friday at 08.00 GMT on the BBC World Service
The technology adopted has varied according to local cultures. South Korea used smartphone data along with information from credit card payments and CCTV to track the movements of people infected with the virus.

Victim of success
Singapore used what appeared to be a less invasive method in developing a Bluetooth contact-tracing app but it ended up collecting a lot of data for the government.

Many European countries ended up with decentralised contact-tracing apps which handed over very little data to government or health authorities.

This was the approach eventually chosen for the NHS Covid-19 app in England and Wales, after an initial trial of a centralised app proved both controversial and technically disappointing.
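The decentralised principle these apps rely on can be sketched in a few lines of Python. Phones broadcast only short-lived random identifiers derived from a private key, and the matching against an infected person’s published keys happens on the handset, never on a central server. This is a toy illustration of the idea only – SHA-256 stands in for the real key-derivation scheme, and all names here are invented, not taken from the actual NHS or Apple-Google code.

```python
import hashlib
import os

def ephemeral_ids(daily_key: bytes, intervals: int = 144) -> list:
    """Derive short rotating identifiers from a private daily key.
    (Simplified stand-in for the derivation real apps use.)"""
    return [hashlib.sha256(daily_key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(intervals)]

# Each phone keeps its daily key private and broadcasts only the
# rotating ephemeral IDs over Bluetooth.
alice_key = os.urandom(16)

# Bob's phone logs the IDs it overhears nearby - here, some of Alice's.
bob_heard = set(ephemeral_ids(alice_key)[10:20])

# If Alice tests positive, she uploads ONLY her daily key to the server;
# no record of whom she met ever leaves her phone.
published_keys = [alice_key]

# Bob's phone downloads the published keys, re-derives the ephemeral
# IDs locally, and checks them against its own log of overheard IDs.
exposed = any(eid in bob_heard
              for key in published_keys
              for eid in ephemeral_ids(key))
print(exposed)  # True - Bob is alerted, entirely on his own device
```

Because the server only ever sees the keys of people who test positive, and never the contact logs, health authorities learn almost nothing about who met whom – which is exactly the property that distinguished the European decentralised apps from Singapore’s centralised one.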

Susan Landau, professor of cyber-security at America’s Tufts University, examines the various methods in her book People Count: Contact-Tracing Apps and Public Health. She tells Tech Tent how each has fared.

“South Korea has done quite well in controlling the disease. But one has to say that there are cultural aspects to this – as well as the technology, the willingness to wear masks.”

That, she explains, would not work in the United States.

Singapore’s centralised app has worked well, she says, because citizens have been required to use it in offices, shopping centres and schools. But they have also been forced to hand over lots of very sensitive data.

“It has been used for criminal investigations,” she says. “If you’re a journalist and people know proximity information, then they know who you’ve been talking to. And that, of course, can be really dangerous for human rights workers.”

As for the decentralised apps, their effectiveness “has become more clear with time”. Prof Landau points to a study in the journal Nature which showed that the NHS Covid-19 app had averted hundreds of thousands of cases of the virus.

But in the last week, the app seems to have become the victim of its own success. With cases of Covid rising rapidly, the number of alerts telling app users to go into isolation has soared.

With many businesses angry that employees are being sent home – in their view unnecessarily – by pings from the app, there is mounting pressure to change the way it works.

Politicians seem to be responding, briefing that its sensitivity may be tweaked, although at the same time the Department of Health says: “The app is doing exactly what it was designed to do.”

The technology being used for contact-tracing apps is far from perfect – Prof Landau points out that Bluetooth doesn’t know whether a contact takes place outside or inside, where the danger of infection is much higher.

But overall, automated contact tracing appears to be a useful addition to the old-fashioned manual variety – after all, people’s recollections of who they were in contact with a few days ago and at what distance may also be unreliable.

Susan Landau points out that there is also an economic and social context to be considered: “For people who are bus drivers, restaurant workers, food service people, an exposure notification – where they have to stay home from work, they’re not getting paid, they may, on the third exposure notification, lose their position – can be very expensive.”

It turns out that contact-tracing smartphone apps have been an experiment not just in technology but in psychology, politics and economics.

But whatever their faults, it seems likely that they will remain a weapon in the public health armoury, ready to be deployed when the next pandemic comes along.