Capitol riot: Man accused after Bumble dating app boast

A man has been accused of taking part in the US Capitol riots after allegedly boasting about it on a dating app.

Robert Chapman, from New York, told a user he matched with on the dating app Bumble “I did storm the Capitol”, FBI court filings say.

They replied “we are not a match”, and shared a screenshot of the exchange with authorities.

Mr Chapman was arrested and charged in New York on Thursday, media reports say.

According to the court filings, Mr Chapman, from the town of Carmel in New York State, told his Bumble match: “I did storm the Capitol… I made it all the way into Statuary Hall!”

The FBI says police body-camera footage captures Mr Chapman in Statuary Hall during the 6 January riot at the Capitol building in Washington DC.

A post on Mr Chapman’s public Facebook account, using the alias “Robert Erick”, said he was leaving New York City the day before the riot, the FBI said.

The next day, the account posted a photo showing him posing within the Capitol, captioned “INSIDE THE CRAPITOL!!!”.

Mr Chapman was arrested and charged with trespassing and disorderly conduct on restricted government property, according to NBC New York.

He had previously been arrested in New York in 2017, according to New York State Police.

The US justice department has charged more than 400 people with participation in the 6 January attack.

Federal prosecutors expect to charge at least 100 more people for taking part in the riots.

Jack Dorsey and Elon Musk agree on bitcoin’s green credentials

Tesla chief Elon Musk has agreed with Twitter boss Jack Dorsey, who has said that bitcoin “incentivises” renewable energy, despite experts warning otherwise.

The cryptocurrency’s carbon footprint is as large as those of some of the world’s biggest cities, studies suggest.

But Mr Dorsey claims that could change if bitcoin miners worked hand-in-hand with renewable energy firms.

One expert said it was a “cynical attempt to greenwash” bitcoin.

China, where more than two-thirds of power is from coal, accounts for more than 75% of bitcoin mining around the world.

The mining process to generate new bitcoin involves solving complex mathematical equations, which requires large amounts of computing power.

New sets of transactions are added to bitcoin’s blockchain (the ledger that records the cryptocurrency’s transactions) every 10 minutes by miners from around the world.
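The “complex mathematical equations” are in fact a brute-force search for a hash below a target value. A toy sketch in Python (illustrative only; the difficulty and block format are not real bitcoin parameters):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce so that SHA-256(data + nonce) starts with
    `difficulty` zero hex digits - a toy stand-in for bitcoin's puzzle."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Low difficulty so this finishes in a moment; the real network's target
# is so low that miners worldwide need roughly 10 minutes to find a block.
print(mine("example transactions", 4))
```

Each extra zero digit multiplies the expected work by 16, which is why mining at the real network’s difficulty consumes so much electricity.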

In a tweet on Wednesday, Mr Dorsey said that “bitcoin incentivises renewable energy”, to which Mr Musk replied “True”.

The tweet comes soon after the release of a White Paper from Mr Dorsey’s digital payment services firm Square, and global asset management business ARK Invest.

Entitled “Bitcoin as key to an abundant, clean energy future”, the paper argues that “bitcoin miners are unique energy buyers”, because they offer flexibility, pay in a cryptocurrency, and can be based anywhere with an internet connection.

“By combining miners with renewables and storage projects, we believe it could improve the returns for project investors and developers, moving more solar and wind projects into profitable territory,” it said.

Author and bitcoin critic David Gerard described the paper as a “cynical exercise in bitcoin greenwashing”.

“The reality is: bitcoin runs on coal,” he told the BBC.

He gave the example of how an accident at a coal mine in Xinjiang meant it had to temporarily close, causing power cuts across the area and crippling the ability to mine new bitcoins.

“This slowed the blockchain down considerably… and coincided with the recent bitcoin price drop,” he said.

“Bitcoin mining is so ghastly and egregious that the number one job of bitcoin promoters is to make excuses for it – any excuse at all.”

China v Iceland
One bitcoin is currently worth $53,000 (£38,000) and the price hike has led to a surge in demand for new coins.

A recent study suggested that the amount of bitcoin mining happening in China could threaten the country’s emission reductions targets.

Meanwhile, analysis by the University of Cambridge suggests the bitcoin network uses more than 121 terawatt-hours (TWh) annually, which would rank it in the top 30 electricity consumers worldwide if it were a country.

But there are some cryptocurrency miners based in countries such as Iceland and Norway, where energy production is almost 100% renewable, via hydro-electricity and geothermal energy.

Writing in Medium, bitcoin expert Phil Geiger pointed out that “bitcoin mining rewards the most energy efficient miners with the most profit”.

“Mining is about maximising the number of hashes (computations) per kW of electricity,” he wrote.

“Currently, the most efficient way to generate the highest hashes/kW is through the use of solar energy and hydro-electric, because those are the cheapest ways to produce electricity.”
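Mr Geiger’s point reduces to simple arithmetic: for a rig with a fixed hash rate and power draw, the electricity price is the only variable, so the cheapest power source wins. A sketch with hypothetical figures (the hash rate, power draw and tariffs below are invented for illustration):

```python
def cost_per_terahash(ths: float, kw: float, price_per_kwh: float) -> float:
    """Electricity cost (dollars) per terahash of work for a rig
    producing `ths` terahashes/second while drawing `kw` kilowatts."""
    # In one hour the rig does ths * 3600 TH and consumes kw kWh.
    terahashes_per_kwh = ths * 3600 / kw
    return price_per_kwh / terahashes_per_kwh

# The same hypothetical rig on two power sources.
coal = cost_per_terahash(100, 3.25, 0.06)   # grid coal at $0.06/kWh
hydro = cost_per_terahash(100, 3.25, 0.03)  # cheap hydro at $0.03/kWh
print(f"coal: ${coal:.2e}/TH, hydro: ${hydro:.2e}/TH")
```

With identical hardware, halving the electricity price halves the cost per hash, which is why miners cluster wherever power is cheapest, whether that is coal or hydro.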

TikTok loophole sees users post pornographic and violent videos

TikTok is banning users who post pornographic and violent videos as their profile pictures to circumvent moderation, as part of a viral trend.

BBC News alerted TikTok to the “Don’t search this up” craze, which has accumulated nearly 50 million views.

TikTok has also banned the hashtags used to promote the offending profiles and is deleting the videos.

Users say the trend is encouraging pranksters to post the most offensive or disgusting material they can find.

BBC News has seen clips of hardcore pornography shared as profile pictures on the app, as well as an Islamic State group video of the murder of Jordanian pilot Muadh al-Kasasbeh, burned to death in a cage in 2015.

‘Especially worrying’
Tom, a teenager from Germany, whose surname we have agreed not to use, first contacted BBC News about the trend.

He said: “I’ve seen gore and hardcore porn and I’m really concerned about this because so many kids use TikTok.

“I find it especially worrying that there are posts with millions of views specifically pointing out these profiles, yet it takes ages for TikTok to act.”

Tom added he had reported multiple users to the app for posting the offensive profile-picture videos.

On the platform, users can set a video as their profile picture, rather than a still image.

Videos about the trend are being posted on the app in English and Spanish.

The offending accounts are often named with a random jumble of letters and words and have no TikTok videos uploaded to them, apart from the clip in the profile box.

Some of the accounts have tens of thousands of followers waiting for the profile picture to be changed to something shocking.

Regular users are able to find these accounts by watching otherwise mundane TikTok videos advertising usernames people should “search up” to find the most shocking clips.

And these videos are being recommended by the TikTok algorithm on the app’s main For You Page.

Experts say the trend is unique and highlights a previously unknown vulnerability in TikTok.

“TikTok has a rather positive reputation when it comes to trust and safety so I would not fault them for not having moderation policies and tools for a toxic phenomenon they were not aware of,” Roi Carthy, from social-media moderation company L1ght, said.

“Clearly TikTok is working to stop this trend – but for users, as always, vigilance is key.

“TikTok fans should be conscious of the risks on the platform and parents should make an effort to learn what new apps their children are using, along with the content and human dynamics they are exposed to.”

TikTok is one of the world’s fastest growing apps, with an estimated 700 million monthly active users around the world.

A spokesman for the company said: “We have permanently banned accounts that attempted to circumvent our rules via their profile photo and we have disabled hashtags including #dontsearchthisup.

“Our safety team is continuing their analysis and we will continue to take all necessary steps to keep our community safe.”

Disabling a hashtag meant the term could not be found in search or created again, TikTok added.

Its moderation teams are also understood to be analysing and removing similar copycat terms being used for the trend.

BBC News tried to speak to one user who posted an explicit video to their profile but did not receive a response.

Apple event: AirTag, iPad and iMac lead line-up

Apple has shown off its latest product line-up in its first big event of 2021.

The firm is increasing the number of products which will contain its own in-house developed M1 chip, as the sector struggles with a global semiconductor shortage.

It finally unveiled its much-anticipated tracker tile, the AirTag, which will launch at the end of April.

And it announced its first significant update to the iconic iMac desktop computer in recent years.

It also showcased a new iPad Pro, complete with M1 chip and 5G connectivity.

What was not discussed, but was later shared by Apple, was the roll-out of the latest version of its operating system, iOS 14.5, which will include a controversial update limiting what app owners can see about user activity outside of their own apps without permission.

It’s a popular business model for “free” internet services, including Facebook, which use the data they gather on their members’ online habits to target advertising.

AirTags

AirTags have been highly-anticipated by Apple fans. They are small round disks which can be attached to anything, and transmit a Bluetooth signal to a home gadget – an iPad or iPhone – to alert the user to their location.

The original Tile tracker, a rival product, launched following a successful crowdfunding campaign in 2014.

AirTags will cost £29 ($29 in US) and are due to launch on April 30, said Apple engineering program manager Carolyn Wolfman-Estrada. They work with all devices containing the U1 chip, which includes the iPhone 11 (which launched in 2019) and later models.

She also said they were designed to “track items not people”, with features such as rotating identifiers and audible alerts from unknown tags built in to protect privacy.

Colleen Novielli showed off new super-thin iMacs in seven different colours, built with Apple’s new M1 chip, and a 24in, 11.3 million pixel screen, along with a revamped camera and other upgrades. One of Apple’s older products, the iMac has not seen a significant revamp for years.

However, it has a maximum RAM (the device’s short-term memory) of 16GB, putting it on a par with the older Mac Mini.

“It is little surprise Apple has resisted updating the iMac over the last few years, given the transformational impact the M1 architecture has had on the overall design,” commented Leo Gebbie from CCS Insight.

“This is an endorsement of Apple’s multi-year, multi-billion-dollar investment in creating its own silicon platforms which now power all its key devices.”

5G comes to iPad
The M1 chip, an ultra-wide camera and 5G data connectivity are all coming to the new iPad Pro, said product manager Raja Bose.

The M1 chip will make the latest version’s graphics performance 1,500 times faster than the original, he said, in a presentation which focused on the iPad’s graphics and gaming benefits.

The new model will also feature up to 2 terabytes of storage.

According to analysts IDC, the tablet market overall grew in 2020 for the first time in seven years, with sales driven by consumers and education providers during the global pandemic.

“Tablets emerged as a reliable alternative for consumers to meet their needs for content consumption and provide access to remote schooling during the lockdown,” said Daniel Goncalves from IDC.

Mr Goncalves added that many households had bought extra tablets in the last 12 months, in order to keep up with their increased use.

Apple also announced a new subscription podcast platform, in keeping with its move towards streaming services, first announced in 2019.

And it showed off a new 4K Apple TV, with a re-designed Siri remote and HDR (High Dynamic Range). HDR offers a greater range of colours, making pictures more vivid and realistic. Sky launched an HDR service last year for a selection of its nature documentaries – but only to customers with specific premium subscriptions.

This launch wasn’t exactly chock-a-block with surprises.

The move to Apple’s own silicon has so far been a success.

The new M1-chip MacBooks have had rave reviews – so it was only a matter of time before Apple switched its other products over to the new design.

The new iMacs and iPads will be the fastest and most powerful ever – though it would of course be preposterous if they weren’t.

Interestingly, Apple also announced its much-speculated AirTags.

You’ll be able to attach one to your keys, your dog’s collar, anything. If you lose them, you’ll be able to find them using your phone.

This looks very much like a product already on the market, Tile – though AirTags appear at first view to have a different tracking system.

This is another example in the long line of Big Tech companies borrowing/copying ideas from smaller companies, and it’s these kinds of moves that many US lawmakers don’t like.

And this launch was interesting too for what wasn’t discussed.

It was speculated that Apple might launch its new app tracking transparency feature – and we now know the latest version of its operating system, iOS 14.5, is due out next week.

Apple wants to limit how companies follow you on the internet on Apple devices. It says it wants to protect its customers’ privacy.

Platforms like Facebook, which rely heavily on ad revenue, are against the move.

However this evening’s event was all about the products.

5G: Rural areas to be allowed taller and wider masts

The government is to allow taller and wider mobile phone masts to be built across the English countryside to speed up the 5G network rollout.

Parliament will vote on the plans, which the government says would improve rural coverage, while “minimising any visual impact” on the landscape.

Masts on public land have been limited to 20m (66ft) in height, but under the plans new masts could be up to 30m tall and existing ones extended to 25m.

Mobile firms will need permission from local councils to build new masts.

Stricter rules will apply in protected areas, including national parks, conservation areas and areas of outstanding natural beauty.

Some people are opposed to mobile phone masts because of their appearance.

Last month, one resident of the village of Scholes in West Yorkshire branded its two new 20m 5G masts an “absolute eyesore”. Local councillor Kath Pinnock said the fact that planning permission had not been required for them amounted to an attack on democratic rights.

The government launched a consultation on the plans in 2019. Former culture secretary Nicky Morgan said at the time that there had to be “a balance struck” between landscape and connectivity.

5G uses higher-frequency waves, which allow more devices to connect and deliver faster internet speeds.

However, these higher-frequency waves travel shorter distances, so more transmitter masts are needed to enable the technology.

Mobile firms will be “incentivised” to focus on improving existing masts rather than building new ones, the Department for Digital, Culture, Media and Sport said.

“We want to level up the country and end the plague of patchy and poor mobile signals in rural communities,” said Culture Secretary Oliver Dowden.

“These practical changes strike a careful balance between removing unnecessary barriers holding back better coverage, while making sure we protect our precious landscape.”

New or improved masts will help deliver the government’s £1bn Shared Rural Network plans.

The government has committed to extending 4G coverage to 95% of the UK – regardless of which network customers use – by 2025.

Leading mobile operators EE, O2, Three and Vodafone will share network equipment and build new masts as part of the scheme.

Mobile UK, which represents the four networks, welcomed the proposals and said the legislative changes should be brought in “as quickly as possible” to ensure “ambitious targets” were met.

The proposals include:

Existing mobile masts to be strengthened without prior approval, so that they can be upgraded for 5G
In unprotected areas, existing masts can be increased to a maximum of 25m (previously 20m)
Greater height increases will also be permitted, subject to approval by the local authority
New masts can be built up to five metres higher – a maximum of 30m in unprotected areas and 25m in protected areas, subject to approval by the planning authority
‘Digital divide’
The changes are long overdue and will help to address a “worrying digital divide” in rural communities, CCS Insight analyst Kester Mann said.

“Dated planning laws have hindered mobile network deployment for years,” he added. “Taller masts enable signals to travel further, providing coverage to residents and visitors over a wider area.

“But it is vital to strike the right balance between improving connectivity and preserving the beauty of the countryside.”

Quality of life
Paul Lee, global head of research at Deloitte, said: “Since the pandemic started, voice calls have been upgraded to video calls; 2G is fine for the former but cannot support the higher-bandwidth video conference call, whereas 4G can.

“4G in rural areas would enable not just consumers to be more connected, but residents to have more options to work from rural locations, most likely offering a better quality of life.

“Essential services such as healthcare could also be delivered over a 4G network more readily than a 2G one.”

Peloton safety: US regulators warn against using treadmill near children

Safety regulators in the US have urged owners of a Peloton treadmill to stop using the product “immediately” if they have children or pets in the home.

The Consumer Product Safety Commission (CPSC) says it is aware of 39 incidents, including one death, involving the Peloton Tread+.

Peloton had already confirmed the death last month and said children should stay away from the machines.

On Saturday, Peloton called the CPSC’s warning “inaccurate and misleading”.

Peloton sells cycling machines and treadmills that can be connected online to virtual fitness classes. Its business has boomed as people look for alternative ways to exercise during lockdowns and gym closures.

The CPSC issued the “urgent warning” on Saturday, releasing video footage of a child being dragged under one of the machines.

It said the machine posed “serious risks to children for abrasions, fractures, and death”.

The CPSC said it believed at least one of the 39 incidents occurred while a parent was running on the treadmill. It also warned that children could be seriously injured while the Tread+ is being used by an adult, not just in cases where children have been left unsupervised.

The regulator advised owners of the machine to use it only in a locked room and to keep objects such as exercise balls away from it.

Peloton said in a press release that the machine was “safe for members to use in their homes and comes with safety instructions and warnings to ensure its safe use”.

“Like all motorised exercise equipment, the Tread+ can pose hazards if the warnings and safety instructions are not followed,” it added.

The company told owners of the machine not to let children under the age of 16 use the treadmill and to keep children, pets, and objects away from the Tread+ at all times.

Facebook faces mass legal action over data leak

Facebook users whose data was compromised by a massive data leak are being urged to take legal action against the tech giant.

About 530 million people had some personal information leaked, including, in some cases, phone numbers.

A digital privacy group is preparing to take a case to the Irish courts on behalf of EU citizens affected.

Facebook denies wrongdoing, saying the data was “scraped” from publicly available information on the site.

Antoin Ó Lachtnain, director of Digital Rights Ireland (DRI), warned other tech giants that the action could be the beginning of a domino effect.

“This will be the first mass action of its kind but we’re sure it won’t be the last,” he said.

“The scale of this breach, and the depth of personal information compromised, is gob-smacking.”

He added: “The laws are there to protect consumers and their personal data and it’s time these technology giants wake up to the reality that protection of personal data must be taken seriously.”

DRI claims Facebook failed to protect user data and notify those who had been affected.

The data leak was first discovered and fixed in 2019, but was recently made easily available online for free.

DRI said individual users who take part in the legal action could be offered compensation of up to €12,000 (£10,445) if it is successful – based on what it says are similar cases in other countries.

‘Domino effect’
“If successful this could well set a precedent and open the door to further class action down the line,” Ray Walsh, a digital privacy expert at ProPrivacy, told the BBC.

“Big Tech might then find that being made to compensate individual users is a strong reminder to work harder on privacy compliance,” he added.

On Thursday, the Irish Data Protection Commission announced its decision to launch an investigation into the leak.

It will assess whether any parts of the GDPR or Data Protection Act 2018 were infringed by Facebook.

If found to be in breach, the social media giant could face fines of up to 4% of its annual global turnover.

Responding to DRI’s legal case, a Facebook spokesman said: “We understand people’s concerns, which is why we continue to strengthen our systems to make scraping from Facebook without our permission more difficult and go after the people behind it.”

He also pointed to other firms involved in similar recent leaks.

“As LinkedIn and Clubhouse have shown, no company can completely eliminate scraping or prevent data sets like these from appearing. That’s why we devote substantial resources to combat it and will continue to build out our capabilities to help stay ahead of this challenge,” he said.

Europe seeks to limit use of AI in society

The use of facial recognition for surveillance, or algorithms that manipulate human behaviour, will be banned under proposed EU regulations on artificial intelligence.

The wide-ranging proposals, which were leaked ahead of their official publication, also promised tough new rules for what they deem high-risk AI.

That includes algorithms used by the police and in recruitment.

Experts said the rules were vague and contained loopholes.

The use of AI in the military is exempt, as are systems used by authorities in order to safeguard public security.

The suggested list of banned AI systems includes:

those designed or used in a manner that manipulates human behaviour, opinions or decisions, causing a person to behave, form an opinion or take a decision to their detriment
AI systems used for indiscriminate surveillance applied in a generalised manner
AI systems used for social scoring
those that exploit information or predictions about a person or group of persons in order to target their vulnerabilities
European policy analyst Daniel Leufer tweeted that the definitions were very open to interpretation.

“How do we determine what is to somebody’s detriment? And who assesses this?” he wrote.

For AI deemed to be high risk, member states would have to apply far more oversight, including the need to appoint assessment bodies to test, certify and inspect these systems.

And any companies that develop prohibited services, or fail to supply correct information about them, could face fines of up to 4% of their global revenue, similar to fines for GDPR breaches.

High-risk examples of AI include:

systems which establish priority in the dispatching of emergency services
systems determining access to or assigning people to educational institutes
recruitment algorithms
those that evaluate creditworthiness
those for making individual risk assessments
crime-predicting algorithms
Mr Leufer added that the proposals should “be expanded to include all public sector AI systems, regardless of their assigned risk level”.

“This is because people typically do not have a choice about whether or not to interact with an AI system in the public sector.”

As well as requiring that new AI systems have human oversight, the European Commission (EC) is also proposing that high-risk AI systems have a so-called kill switch, which could either be a stop button or some other procedure to instantly turn the system off if needed.

“AI vendors will be extremely focussed on these proposals, as it will require a fundamental shift in how AI is designed,” said Herbert Swaniker, a lawyer at Clifford Chance.

Sloppy and dangerous
Meanwhile Michael Veale, a lecturer in digital rights and regulation at University College London, highlighted a clause that will force organisations to disclose when they are using deepfakes, a particularly controversial use of AI to create fake humans or to manipulate images and videos of real people.

He also told the BBC that the legislation was primarily “aimed at vendors and consultants selling – often nonsense – AI technology to schools, hospitals, police and employers”.

But he added that tech firms who used AI “to manipulate users” may also have to change their practices.

With this legislation, the EC has had to walk a difficult tightrope between ensuring AI is used for what it calls “a tool… with the ultimate aim of increasing human wellbeing”, and also ensuring it doesn’t stop EU countries competing with the US and China over technological innovations.

And it acknowledged that AI already informed many aspects of our lives.

The European Centre for Not-for-Profit Law, which had contributed to the European Commission’s White Paper on AI, told the BBC that there was “lots of vagueness and loopholes” in the proposed legislation.

“The EU’s approach to binary-defining high versus low risk is sloppy at best and dangerous at worst, as it lacks context and nuances needed for the complex AI ecosystem already existing today.

“First, the commission should consider risks of AI systems within a rights-based framework – as risks they pose to human rights, rule of law and democracy.

“Second, the commission should reject an oversimplified low-high risk structure and consider a tier-based approach on the levels of AI risk.”

The details could change again before the rules are officially unveiled next week. And they are unlikely to become law for several more years.

NHS Covid-19 app update blocked for breaking Apple and Google’s rules

An update to England and Wales’s contact tracing app has been blocked for breaking the terms of an agreement made with Apple and Google.

The plan had been to ask users to upload logs of venue check-ins – carried out via poster barcode scans – if they tested positive for the virus. This could be used to warn others.

The update had been timed to coincide with the relaxation of lockdown rules.

But the two firms had explicitly banned such a function from the start.

Under the terms that all health authorities signed up to in order to use Apple and Google’s privacy-centric contact-tracing tech, they had to agree not to collect any location data via the software.

As a result, Apple and Google refused to make the update available for download from their app stores last week, and have instead kept the old version live.

When questioned, the Department of Health declined to discuss how this misstep had occurred.

Scotland has avoided this pitfall because it released a separate product – Check In Scotland – to share venue histories, rather than trying to build the functionality into its Protect Scotland contact-tracing app.

Virus hotspots
NHS Covid-19’s users have long been able to scan a QR code when entering a shop, restaurant or other venue to log within the app the fact that they had visited.

But this data has never been accessible to others.

Instead, it has only come into use if local authorities have identified a location as being a virus hotspot by other means, and flagged the fact to a central database.

Since each phone regularly checks the database for a match, it can alert the owner if they need to take action as a consequence, without sharing the information with others.
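That on-device matching can be sketched as follows. This is an illustrative reconstruction, not the NHS app’s actual code: the venue IDs, data shapes and function name are all invented.

```python
from datetime import date

def check_for_alerts(local_checkins, hotspot_list):
    """Compare the phone's private venue log against the downloaded
    hotspot list; the visit history itself never leaves the device."""
    hotspots = set(hotspot_list)
    return [visit for visit in local_checkins if visit in hotspots]

# The log of QR-code scans kept on the phone (invented IDs).
local_checkins = [("venue-123", date(2021, 4, 10)),
                  ("venue-456", date(2021, 4, 11))]
# Flagged (venue, date) pairs published to the central database.
hotspot_list = [("venue-456", date(2021, 4, 11))]

if check_for_alerts(local_checkins, hotspot_list):
    print("Alert: you visited a flagged venue - monitor your symptoms.")
```

Because only the hotspot list travels over the network, this design stays within the no-location-sharing terms; the blocked update would have reversed the direction of the data flow by uploading users’ logs.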

However, this facility has rarely been used, in part because prior to the most recent lockdown, many local authorities were confused about what they were supposed to do.

Before shops reopened in England and Wales on Monday, along with outdoor hospitality venues in England, the intention had been to automate the process.

This would have involved users who had tested positive being asked if they were willing to upload their logs.

Depending on the thresholds set – for example, how many infected users registered having visited the same place on the same day – other app users would then have been told to either monitor their symptoms or immediately get a test, whether they felt ill or not.

The Department of Health had described this as being a “privacy-protecting” approach.

But despite being opt-in, it was still a clear breach of the terms that health chiefs had agreed to when they switched to adopting Apple and Google’s contact tracing API (application programming interface) in June 2020. This was after their original effort was found to miss too many potential cases of contagion.

Setting a precedent
The tech firms’ Exposure Notifications System FAQ states that apps involved must “not share location data from the user’s device with the public health authority, Apple, or Google”.

And a separate document covering the terms and conditions in more detail says that “a contact tracing app may not use location-based APIs… and may not collect any device information to identify the precise location of users”.

Had Apple and Google made an exception for England and Wales in this case, it could have set a precedent for other countries to have sought changes of their own.

The team behind the app was told not to disclose why the update had failed to be released on schedule.

A spokeswoman for the Department of Health told the BBC: “The deployment of the functionality of the NHS Covid-19 app to enable users to upload their venue history has been delayed.

“This does not impact the functionality of the app and we remain in discussions with our partners to provide beneficial updates to the app which protect the public.”

A spokeswoman for the Welsh government said it had nothing to add.

Apple has indicated it wants to work out a solution.

Just a week ago, the Department of Health seemed to think this update to the app would go through without problems.

It’s hard to understand why. After all, the rules for using the Apple-Google Exposure Notification System were clear – collecting any location data was a no-no.

The app team knew that when they switched to it last summer, having discovered that going it alone with their own system was just not practical.

But they may have assumed that, because the sharing of locations by users was optional, the tech giants might show some flexibility.

Instead, Apple and Google have insisted that rules are rules.

What this underlines is that governments around the world have been forced to frame part of their response to the global pandemic according to rules set down by giant unelected corporations.

At a time when the power of the tech giants is under the microscope as never before, that will leave many people feeling uncomfortable.

Facebook tweaks its news feed with new controls

Facebook is adding features to users’ main news feed, including letting them limit comment on their public posts to friends or even specific tagged people.

The feature is similar to one rolled out by Twitter last year.

Facebook will also make it easier to revert to “chronological” mode, with the latest posts displayed first.

The company abandoned this model in 2009 in favour of using algorithms to rank the content shown, although the option remained buried in a menu.

But on Wednesday, vice-president Nick Clegg said a lack of transparency and information available to users had led to mistrust.

And so Facebook has moved its “most recent” option to a new “feed filter bar” at the top of the news feed. The bar also allows users to prioritise posts from up to 30 friends or particular pages (a feature introduced in October last year) before reverting to algorithmic content ranking.

Despite the change, Mr Clegg, the former Deputy Prime Minister, said Facebook’s algorithmic content ranking – and personalised advertising – continued to offer “so many benefits to society”.

Comparing its use of algorithms to a romantic couple in which one does the shopping and the other cooks dinner, he said: “Content ranking is a dynamic partnership between people and algorithms.

“Your news feed is shaped heavily by your choices and actions.”

In a further bid for transparency, the news feed’s “Why am I seeing this?” links will now detail the criteria for recommended posts – from people and organisations users do not follow – such as having recently interacted with a similar post.

The new controls would be available for Android app users first, Facebook said, with an iOS update for Apple phones in the coming weeks.