TikTok is being investigated by France’s data watchdog

More worries for TikTok: A European data watchdog that’s landed the biggest blow to a tech giant to date — slapping Google with a $57M fine last year (upheld in June) — now has an open investigation into the social video app du jour, TechCrunch has confirmed.

A spokeswoman for France’s CNIL told us it opened an investigation in May 2020 into how the app handles user data, following a complaint related to a request to delete a video. Its probe of the video sharing platform was reported earlier by Politico.

Under the European Union’s data protection framework, citizens who have given consent for their data to be processed continue to hold a range of rights attached to their personal data, including the ability to request a copy or deletion of the information, or ask for their data in a portable form.

Additional requirements under the EU’s GDPR (General Data Protection Regulation) include transparency obligations designed to ensure accountability under the framework. This means data controllers must provide data subjects with clear information on the purposes of processing — including in order to obtain legally valid consent to process the data.

The CNIL’s spokeswoman told us its complaint-triggered investigation into TikTok has since widened to include issues related to transparency requirements about how it processes user data; users’ data access rights; transfers of user data outside the EU; and steps the platform takes to ensure the data of minors is adequately protected — a key issue, given the app’s popularity with teens.

French data protection law lets children consent to the processing of their data for information society services such as TikTok at age 15 (or younger with parental consent).

As regards the original complaint, the CNIL’s spokeswoman said the person in question has since been “invited to exercise his rights with TikTok under the GDPR, which he had not taken beforehand” (via Google Translate).

We’ve reached out to TikTok for comment on the CNIL investigation.

It’s not clear whether the French watchdog will be able to see its investigation of TikTok through to a full conclusion.

In further emailed remarks its spokeswoman noted the company is seeking to designate Ireland’s Data Protection Commission (DPC) as its lead authority in Europe — and is setting up an establishment in Ireland for that purpose. (Related: Last week TikTok announced a plan to open its first data center in Europe, which will eventually hold all EU users’ data, also in Ireland.)

If TikTok is able to satisfy the legal conditions it may be able to move any GDPR investigation to the DPC — which has gained a reputation for being painstakingly slow to enforce complex cross-border GDPR cases. Though in late May it finally submitted a first draft decision (on a Twitter case) to the other EU data watchdogs for review. A final decision in that case is still pending.

“The [TikTok] investigations could therefore ultimately be the sole responsibility of the Irish protection authority, which will have to deal with the case in cooperation with the other European data protection authorities,” the CNIL’s spokeswoman noted, before emphasizing there is a standard of proof it will have to meet.

“To come under the sole jurisdiction of the Irish authority and not of each of the authorities, TikTok will nevertheless have to prove that its establishment in Ireland fulfils the conditions of a ‘principal establishment’ within the meaning of the GDPR.”

Under the framework data watchdogs have powers to issue penalties of up to 4% of a company’s global annual turnover and can order infringing data processing to cease.

Google, Nokia, Qualcomm are investors in $230M Series A2 for Finnish phone maker, HMD Global

Mobile device maker HMD Global has announced a $230M Series A2 — its first tranche of external funding since a $100M round back in 2018 when it tipped over into a unicorn valuation. Since late 2016 the startup has exclusively licensed Nokia’s brand for mobile devices, going on to ship some 240M devices to date.

Its latest cash injection is notable both for its size (HMD claims it as the third largest funding round in Europe this year) and for the profile of the strategic investors ploughing in capital — namely: Google, Nokia and Qualcomm.

Though whether a tech giant (Google) whose OS dominates the world’s smartphone market (Android) becoming a strategic investor in Europe’s last significant mobile OEM (HMD) catches the attention of regional competition enforcers remains to be seen. Er, vertical integration anyone? (To wit: It’s a little over two years since Google was slapped with a $5BN penalty by EU regulators for antitrust violations related to how it operates Android — and the Commission has said it continues to monitor the market ‘remedies’.)

In a further quirk, when we spoke to HMD Global CEO, Florian Seiche, ahead of today’s announcement, he didn’t expect the names of the investors to be disclosed — but we’d already been sent press release material listing them so he duly confirmed the trio are investors in the round. (But wouldn’t be drawn on how much equity Google is grabbing.)

HMD’s smartphones run on Google’s Android platform, which gives the tech giant a firm business reason for supporting the mobile maker in growing the availability of Google-packed hardware in key growth markets around the world.

And while HMD likens its consistent (and consistently updated) flavor of Android to the premium ‘pure’ Android experience you get from Google’s own-brand Pixel smartphones, the difference is the Finnish company offers devices across the range of price points, and targets hardware at mobile users in developing markets.

The upshot is relatively little overlap with Google’s Pixel hardware, and still plenty of business upside for Google should HMD grow the pipeline of Google services users (as it makes money by targeting ads).

Connoisseurs of mobile history may see more than a little irony in Google investing into Nokia branded smartphones (via HMD), given Android’s role in fatally disrupting Nokia’s lucrative smartphone business — knocking the Finnish giant off its perch as the world’s number one mobile maker and ushering in an era of Android-fuelled Asian mobile giants. But wait long enough in tech and what goes around oftentimes comes back around.

“We’re extremely excited,” said Seiche, when we mention Google’s pivotal role in Nokia’s historical downfall in smartphones. “How we are going to write that next chapter on smartphones is a critical strategic pillar for the company and our opportunity to team up so closely with Google around this has been a very, very great partnership from the beginning. And then this investment definitely confirms that — also for the future.”

“It’s a critical time for the industry therefore having a clear strategy — having a clear differentiation and a different point of view to offer, we believe, is a fantastic asset that we have developed for ourselves. And now is a great moment for us to double down on this,” he added.

We also asked Seiche whether HMD has any interest in taking advantage of the European Commission’s Android antitrust enforcement decision — i.e. to fork Android and remove the usual Google services, perhaps swapping them out for some European alternatives, which is at least a possibility for OEMs selling in the region — but Seiche told us: “We have looked at it but we strongly believe that consumers or enterprise customers actually love [Google] services and therefore they choose those services for themselves.” (Millions of dollars of direct investment from Google also, presumably, helps make the Google services business case stack up.)

Nokia, meanwhile, has always had a close relationship with HMD — which was established by former Nokia execs for the sole purpose of licensing its iconic mobile brand. (The backstory there is a clause in the sale terms of Nokia’s mobile device division to Microsoft expired in 2016, paving the way for Nokia’s brand to be returned to the smartphone market without the prior Windows Mobile baggage.)

Its investment into HMD now looks like a vote of confidence in how the company has been executing in the fiercely competitive mobile space to date (HMD doesn’t break out a lot of detail about device sales but Seiche told us it sold in excess of 70M mobiles last year; that’s a combined figure for smartphones and feature phones) — as well as an upbeat assessment of the scope of the growth opportunity ahead of it.

On the latter front, US-led geopolitical tensions between the West and China do look poised to generate a tailwind for HMD’s business.

Mobile chipmaker Qualcomm, for example, is facing a loss of business as US government restrictions threaten its ability to continue selling chips to Huawei, a major Chinese device maker that’s become a key target for US president Trump. Its interest in supporting HMD’s growth therefore looks like a way for Qualcomm to hedge against US government disruption aimed at Chinese firms in its mobile device maker portfolio.

And with Trump’s recent threats against the TikTok app, it seems safe to assume that no tech company with a Chinese owner is beyond the administration’s reach.

As a European company, HMD is able to position itself as a safe haven — and Seiche’s sales pitch talks up a focus on security detail and overall quality of experience as key differentiating factors vs the Android hordes.

“We have been very clear and very consistent right from the beginning to pick these core principles that are close to our heart and very closely linked with the Nokia brand itself — and definitely security, quality and trust are key elements,” he told TechCrunch. “This is resonating with our carrier and retail customers around the world and it is definitely also a core fundamental differentiator that those partners that are taking a longer term view clearly see that same opportunity that we see for us going forward.”

HMD does use manufacturing facilities in China, as well as in a number of other locations around the world — including Brazil, India, Indonesia and Vietnam.

But asked whether it sees any supply chain risks related to continued use of Chinese manufacturers to build ‘secure’ mobile hardware, Seiche responded by claiming: “The most important [factor] is we do control the software experience fully.” He pointed specifically to HMD’s acquisition of Valona Labs earlier this year; the Finnish security startup now carries out all of HMD’s software audits. “They basically control our software to make sure we can live up to that trusted standard,” Seiche added.

Landing a major tranche of new funding now — and with geopolitical tension between the West and the Far East shining a spotlight on its value as an alternative, European mobile maker — HMD is eyeing expansion in growth markets such as Africa, Brazil and India. (Currently, HMD said it’s active in 91 markets across eight regions, with its devices ranged in 250,000 retail outlets around the world.)

It’s also looking to bring 5G to devices at a greater range of price points. HMD’s first 5G device, the flagship Nokia 8.3, is due to land in the US and Europe in a matter of weeks, and Seiche suggested a timeframe of the middle of next year for launching a 5G device at a mid-tier price point. He also said the company wants to do more on the mobile services side.

“The 5G journey again has started, in terms of market adoption, in China. But now Europe, US are the key next opportunity — not just in the premium tier but also in the mid segment. And to get to that as fast as possible is one of our goals,” he said, noting joint-working with Qualcomm on that.

“We also see great opportunity with Nokia in that 5G transition — because they are also working on a lot of private LTE deployments which is also an interesting area since… we are also very strongly present in that large enterprise segment,” he added.

On mobile services, Seiche highlighted the launch of HMD Connect: A data SIM aimed at travellers — suggesting it could expand into additional connectivity offers in future, forging more partnerships with carriers. 

“We have already launched several services that are close to the hardware business — like insurance for your smartphones — but we are also now looking at connectivity as a great area for us,” he said. “The first pilot of that has been our global roaming but we believe there is a play in the future for consumers or enterprise customers to get their connectivity directly with their device. And we’re partnering also with operators to make that happen.”

“You can see us more as a complement [to carriers],” he added, arguing that business “dynamics” for carriers have also changed substantially — and customer acquisition hasn’t been a linear game for some time.

“In a similar way when we talk about Google Pixel vs us — we have a different footprint. And again if you look at carriers where they get their subscribers from today is already today a mix between their own direct channels and their partner channels. And actually why wouldn’t a smartphone player be a natural good partner of choice also for them? So I think you’ll see that as a trend, potentially, evolving in the next couple of years.”

Court finds some fault with UK police force’s use of facial recognition tech

Civil rights campaigners in the UK have won a legal challenge to South Wales Police’s (SWP) use of facial recognition technology. The win on appeal is being hailed as a “world-first” victory in the fight against the use of an “oppressive surveillance tool”, as human rights group Liberty puts it.

However the police force does not intend to appeal the ruling — and has said it remains committed to “careful” use of the tech.

The back story here is SWP has been trialing automated facial recognition (AFR) technology since 2017, deploying a system known as AFR Locate on around 50 occasions between May 2017 and April 2019 at a variety of public events in Wales.

The force has used the technology in conjunction with watchlists of between 400 and 800 people — which included persons wanted on warrants; persons who had escaped from custody; persons suspected of having committed crimes; persons who may be in need of protection; vulnerable persons; persons of possible interest to it for intelligence purposes; and persons whose presence at a particular event causes particular concern, per a press summary issued by the appeals court.

A challenge was brought to SWP’s use of AFR by a Cardiff-based civil liberties campaigner called Edward Bridges, with support from Liberty. Bridges was in the vicinity of two deployments of AFR Locate — first on December 21, 2017 in Cardiff city centre and again on March 27, 2018 at the Defence Procurement, Research, Technology and Exportability Exhibition taking place in the city — and while he was not himself included on a force watchlist he contends that given his proximity to the cameras his image was recorded by the system, even if deleted almost immediately after.

The human rights implications of warrantless processing of sensitive personal data by the police are the core issue in the case. The issue of bias risks that can flow from automating identity decisions is another key consideration.

Bridges initially brought a claim for judicial review on the basis that AFR was not compatible with the right to respect for private life under Article 8 of the European Convention on Human Rights, data protection legislation, and the Public Sector Equality Duty (“PSED”) under section 149 of the Equality Act 2010.

The divisional court dismissed his claim on all grounds last September. He then appealed on five grounds — and has succeeded on three under today’s unanimous court of appeal decision.

The court judged that the legal framework and policies used by SWP did not provide clear guidance on where AFR Locate could be used and who could be put on a watchlist — finding too broad a discretion was afforded to police officers to meet the standard required by Article 8(2) of the European Convention on Human Rights.

It also found that an inadequate data protection impact assessment was carried out, given SWP had written the document on the basis of no infringement of Article 8, meaning the force had failed to comply with the UK’s Data Protection Act 2018.

The court also judged the force wrong to hold that it had complied with the PSED — because it had not taken reasonable steps to make enquiries about whether the AFR Locate software contained bias on racial or sex grounds. (Though the court noted there was no clear evidence the tool was so biased.)

Since Bridges brought the challenge London’s Met police has gone ahead and switched on operational use of facial recognition technology — flipping the switch at the start of this year. Although in its case a private company (NEC) is operating the system.

At the time of the Met announcement, Liberty branded the move “dangerous, oppressive and completely unjustified”. In a press release today it suggests the Met deployment may be unlawful for similar reasons as the SWP’s use of the tech — citing a review the force carried out. Civil liberties campaigners, AI ethicists and privacy experts have all accused the Met of ignoring the findings of an independent report which concluded it had failed to consider human rights impacts.

Commenting on today’s appeals court ruling in a statement, Liberty lawyer Megan Goulding said: “This judgment is a major victory in the fight against discriminatory and oppressive facial recognition. The Court has agreed that this dystopian surveillance tool violates our rights and threatens our liberties. Facial recognition discriminates against people of colour, and it is absolutely right that the Court found that South Wales Police had failed in their duty to investigate and avoid discrimination.

“It is time for the Government to recognise the serious dangers of this intrusive technology. Facial recognition is a threat to our freedom — it needs to be banned.”

In another supporting statement, Bridges added: “I’m delighted that the Court has agreed that facial recognition clearly threatens our rights. This technology is an intrusive and discriminatory mass surveillance tool. For three years now South Wales Police has been using it against hundreds of thousands of us, without our consent and often without our knowledge. We should all be able to use our public spaces without being subjected to oppressive surveillance.”

However it’s important to note that he did not win his appeal on all grounds.

Notably the court held that the earlier court had correctly conducted a weighing exercise to determine whether the police force’s use of AFR was a proportionate interference with human rights law, when it considered “the actual and anticipated benefits” of AFR Locate vs the impact of the AFR deployment on Bridges — and decided that the benefits were potentially great, while the individual impact was minor, hence holding that the use of AFR was proportionate under Article 8(2).

So the UK court does not appear to have closed the door on police use of facial recognition technology entirely.

Indeed, it’s signalled that individual rights impacts can be balanced against a ‘greater good’ potential benefit — so the ruling looks more like it’s defining how such intrusive technology can be used lawfully. (And it’s notable that SWP has said it’s “completely committed” to the “careful development and deployment” of AFR, via BBC.)

The ruling does make it clear that any such deployments need to be more tightly bounded than the SWP application to comply with human rights law. But it has not said police use of facial recognition is inherently unlawful.

Forces also cannot ignore equality requirements when making use of such technology — there’s an obligation, per the ruling, to take steps to assess whether automated facial recognition carries a risk of bias.

Given the bias problems that have been identified with such systems, that may prove the bigger blocker to continued police use of this flavor of AI.

EU-US Privacy Shield is dead. Long live Privacy Shield

As the saying goes, insanity is doing the same thing over and over again and expecting different results.

And so we arrive at the news, put out yesterday in the horse latitudes of summer via joint press statement, that the EU’s executive body and the US Department of Commerce have begun talks toward fashioning a shiny new papier-mâché ‘Privacy Shield’.

“The U.S. Department of Commerce and the European Commission have initiated discussions to evaluate the potential for an enhanced EU-U.S. Privacy Shield framework to comply with the July 16 judgment of the Court of Justice of the European Union in the Schrems II case,” the pair write.

The EU-US Privacy Shield, as you may recall, refers to the four-year-old data transfer mechanism which Europe’s top court just sunk with the legal equivalent of a nuclear bomb.

Five years ago the same court carpet-bombed its predecessor, a fifteen-year-old arrangement known — without apparent irony — as ‘Safe Harbor’.

Thousands of companies had been signed up to the Privacy Shield, relying on the claimed legal protection to authorize transatlantic transfers of EU users’ data. The mirage collapsed on cue last month, raising legal questions over continued use of cloud services based in a third country like the US — barring data localization.

Alternative data transfer mechanisms do exist but data controllers wanting to use an alternative tool, like Standard Contractual Clauses (SCCs), to take EU citizens’ data over the pond are legally required to carry out an assessment of whether US law provides adequate protections. If they cannot guarantee the data’s safety they cannot use SCCs legally either. (And if they go ahead they are risking costly regulatory intervention.)

The fall of Privacy Shield should really have shocked no one, given the warnings, right from the get-go, that it amounted to ‘lipstick on a pig’. Nothing has changed the fundamental problems identified by the Court of Justice of the EU in 2015 — so carrying on doing bulk data transfers to the US was headed for the same legal slapdown.

The basic problem is the mechanism failed to do what it said on the tin. Which is to say EU people’s personal data is not safe as houses over there, because US government security agencies have their hands in tech platforms’ cookie jars (and all the other jars and tubes of the modern Internet), as the 2013 Snowden revelations illustrated beyond doubt.

Nothing since the Snowden disclosures has substantially reworked US surveillance law to make it less incompatible with EU privacy law. President Obama made a few encouraging noises but under Trump the administration has dug in on helping itself to people’s data without a warrant. So it’s closer to a funnel than a shield.

Turns out neither a ‘Shield’ nor a ‘Harbor’ were metaphors grand enough to paper over this fundamental clash of legal priorities, when a regional trading bloc with long-standing laws that protect privacy butts up against an alien regime that rubberstamps digital intrusion on national security grounds, with zero concern for privacy.

And so we arrive at the prospect of a new, papier-mâché ‘Privacy Shield II(I)’ — which looks to be the most appropriate metaphor for this latest round of EU-US ‘negotiations’ aimed at cobbling something together to buy more time for data to keep flowing. Bottom line: Even if Commission and US negotiators ink something on paper any claimed legal protections will, without root and branch reform of US surveillance law, sum to another sham headed for a speedy demolition day in court. 

It’s also worth noting that Europe’s judges are stepping on the gas in this respect, with Privacy Shield standing for just a fraction of the time Safe Harbor hung around. So any Privacy Shield II (III if you count Safe Harbor) would likely get even shorter shrift. 

Not that legal reality and legal clarity are preventing fuzzy press soundbites from being despatched from both sides of the Atlantic, of course.

“The European Union and the United States recognize the vital importance of data protection and the significance of cross-border data transfers to our citizens and economies. We share a commitment to privacy and the rule of law, and to further deepening our economic relationship, and have collaborated on these matters for several decades,” the pair write in a fresh attempt to re-spin a legal car crash that everyone could see coming years ahead.

“As we face new challenges together, including the recovery of the global economy after the COVID-19 pandemic, our partnership will strengthen data protection and promote greater prosperity for our nearly 800 million citizens on both sides of the Atlantic.”

There’s no doubting the appetite the Commission and the US Department of Commerce share for data to keep flowing. Both prioritize ‘business as usual’ and lionize their notion of “prosperity”, to the degree that they’re willing to turn a blind eye to rights impacts.

However neither side has demonstrated that it possesses the political clout and influence to remake the US’ data industrial complex — which is what’s needed to meaningfully ‘enhance’ Privacy Shield. Instead, we get publicity for their next pantomime.

We’ve reached out to the Commission with questions, lots of questions.

 

Hypotenuse AI wants to take the strain out of copywriting for ecommerce

Imagine buying a dress online because a piece of code sold you on its ‘flattering, feminine flair’ — or convinced you ‘romantic floral details’ would outline your figure with ‘timeless style’. The very same day your friend buys the same dress from the same website, but she’s sold on a description of ‘vibrant tones’, ‘fresh cotton feel’ and ‘statement sleeves’.

This is not a detail from a sci-fi short story but the reality and big picture vision of Hypotenuse AI, a YC-backed startup that’s using computer vision and machine learning to automate product descriptions for ecommerce.

One of the two product descriptions shown below is written by a human copywriter. The other flowed from the virtual pen of the startup’s AI, per an example on its website.

Can you guess which is which?* And if you think you can — well, does it matter?

Screengrab: Hypotenuse AI’s website

Discussing his startup on the phone from Singapore, Hypotenuse AI’s founder Joshua Wong tells us he came up with the idea to use AI to automate copywriting after helping a friend set up a website selling vegan soap.

“It took forever to write effective copy. We were extremely frustrated with the process when all we wanted to do was to sell products,” he explains. “But we knew how much description and copy affect conversions and SEO so we couldn’t abandon it.”

Wong had been working for Amazon, as an applied machine learning scientist for its Alexa AI assistant. So he had the technical smarts to tackle the problem himself. “I decided to use my background in machine learning to kind of automate this process. And I wanted to make sure I could help other ecommerce stores do the same as well,” he says, going on to leave his job at Amazon in June to go full time on Hypotenuse.

The core tech here — computer vision and natural language generation — is cutting-edge, per Wong.

“What the technology looks like in the backend is that a lot of it is proprietary,” he says. “We use computer vision to understand product images really well. And we use this together with any metadata that the product already has to generate a very ‘human fluent’ type of description. We can do this really quickly — we can generate thousands of them within seconds.”

“A lot of the work went into making sure we had machine learning models or neural network models that could speak very fluently in a very human-like manner. For that we have models that have kind of learnt how to understand and to write English really, really well. They’ve been trained on the Internet and all over the web so they understand language very well. Then we combine that together with our vision models so that we can generate very fluent description,” he adds.

Image credit: Hypotenuse

Wong says the startup is building its own proprietary dataset to further help with training language models — with the aim of being able to generate something that’s “very specific to the image” but also “specific to the company’s brand and writing style” so the output can be hyper-tailored to the customer’s needs.

“We also have defaults of style — if they want text to be more narrative, or poetic, or luxurious — but the more interesting one is when companies want it to be tailored to their own type of branding of writing and style,” he adds. “They usually provide us with some examples of descriptions that they already have… and we use that and get our models to learn that type of language so it can write in that manner.”

What Hypotenuse’s AI is able to do — generate thousands of specifically detailed, appropriately styled product descriptions within “seconds” — has only been possible in very recent years, per Wong. Though he won’t be drawn into laying out more architectural details, beyond saying the tech is a “completely neural network-based, natural language generation model”.

“The product descriptions that we are doing now — the techniques, the data and the way that we’re doing it — these techniques were not around just like over a year ago,” he claims. “A lot of the companies that tried to do this over a year ago always used pre-written templates. Because, back then, when we tried to use neural network models or purely machine learning models they can go off course very quickly or they’re not very good at producing language which is almost indistinguishable from human.

“Whereas now… we see that people cannot even tell which was written by AI and which by human. And that wouldn’t have been the case a year ago.”

(See the above example again. Is A or B the robotic pen? The answer is at the foot of this post.)

Asked about competitors, Wong again draws a distinction between Hypotenuse’s ‘pure’ machine learning approach and others who rely on templates “to tackle this problem of copywriting or product descriptions”.

“They’ve always used some form of templates or just joining together synonyms. And the problem is it’s still very tedious to write templates. It makes the descriptions sound very unnatural or repetitive. And instead of helping conversions that actually hurts conversions and SEO,” he argues. “Whereas for us we use a completely machine learning based model which has learnt how to understand language and produce text very fluently, to a human level.”

There are now some pretty high profile applications of AI that enable you to generate similar text to your input data — but Wong contends they’re just not specific enough for a copywriting business purpose to represent a competitive threat to what he’s building with Hypotenuse.

“A lot of these are still very generalized,” he argues. “They’re really great at doing a lot of things okay but for copywriting it’s actually quite a nuanced space in that people want very specific things — it has to be specific to the brand, it has to be specific to the style of writing. Otherwise it doesn’t make sense. It hurts conversions. It hurts SEO. So… we don’t worry much about competitors. We spent a lot of time and research into getting these nuances and details right so we’re able to produce things that are exactly what customers want.”

So what types of products doesn’t Hypotenuse’s AI work well for? Wong says it’s a bit less relevant for certain product categories — such as electronics. This is because the marketing focus there is on specs, rather than trying to evoke a mood or feeling to seal a sale. Beyond that he argues the tool has broad relevance for ecommerce. “What we’re targeting it more at is things like furniture, things like fashion, apparel, things where you want to create a feeling in a user so they are convinced of why this product can help them,” he adds.

The startup’s SaaS offering as it is now — targeted at automating product description for ecommerce sites and for copywriting shops — is actually a reconfiguration itself.

The initial idea was to build a “digital personal shopper” to personalize the ecommerce experience. But the team realized they were getting ahead of themselves. “We only started focusing on this two weeks ago — but we’ve already started working with a number of ecommerce companies as well as piloting with a few copywriting companies,” says Wong, discussing this initial pivot.

Building a digital personal shopper is still on the roadmap but he says they realized that a subset of creating all the necessary AI/CV components for the more complex ‘digital shopper’ proposition was solving the copywriting issue. Hence dialling back to focus in on that.

“We realized that this alone was really such a huge pain-point that we really just wanted to focus on it and make sure we solve it really well for our customers,” he adds.

For early adopter customers the process right now involves a little light onboarding — typically a call to chat through what their workflow and writing style are like so Hypotenuse can prep its models. Wong says the training process then takes “a few days”. After that, customers plug into it as software-as-a-service.

Customers upload product images to Hypotenuse’s platform or send metadata of existing products — getting corresponding descriptions back for download. The plan is to offer a more polished pipeline process for this in the future — such as by integrating with ecommerce platforms like Shopify.
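To make that workflow concrete, here's a minimal sketch of how a client integration might assemble a request. Everything here is hypothetical — the function, field names and response shape are invented for illustration, since Hypotenuse hasn't published an API spec; only the default style names come from Wong's interview.

```python
# Hypothetical sketch of a product-description request payload.
# Field names and structure are invented for illustration; Hypotenuse
# has not published a public API specification.

def build_description_request(image_url, metadata, brand_style="narrative"):
    """Assemble the kind of payload such a service might accept."""
    return {
        "image_url": image_url,
        "metadata": metadata,       # e.g. title, material, color
        "style": brand_style,       # default styles Wong names in the
                                    # interview: narrative, poetic, luxurious
    }

payload = build_description_request(
    "https://example.com/products/dress-123.jpg",
    {"title": "Floral midi dress", "material": "cotton"},
    brand_style="poetic",
)
print(payload["style"])  # poetic
```

In a real integration this payload would presumably be POSTed to the platform, with generated descriptions coming back for download or piped straight into a store via something like the Shopify integration mentioned above.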

Given the chaotic sprawl of Amazon’s marketplace, where product descriptions can vary wildly from extensively detailed screeds to the hyper sparse and/or cryptic, there could be a sizeable opportunity to sell automated product descriptions back to Wong’s former employer. And maybe even bag some strategic investment before then… However, Wong won’t be drawn on whether or not Hypotenuse is fundraising right now.

On the possibility of bagging Amazon as a future customer he’ll only say “potentially in the long run that’s possible”.

Joshua Wong (Photo credit: Hypotenuse AI)

The more immediate priorities for the startup are expanding the range of copywriting its AI can offer — to include additional formats such as advertising copy and even some ‘listicle’ style blog posts which can stand in as content marketing (unsophisticated stuff, along the lines of ’10 things you can do at the beach’, per Wong, or ’10 great dresses for summer’ etc).

“Even as we want to go into blog posts we’re still completely focused on the ecommerce space,” he adds. “We won’t go out to news articles or anything like that. We think that that is still something that cannot be fully automated yet.”

Looking further ahead he dangles the possibility of the AI enabling infinitely customizable marketing copy — meaning a website could parse a visitor’s data footprint and generate dynamic product descriptions intended to appeal to that particular individual.

Crunch enough user data and maybe it could spot that a site visitor has a preference for vivid colors and likes to wear large hats — ergo, it could dial up relevant elements in product descriptions to better mesh with that person’s tastes.

“We want to make the whole process of starting an ecommerce website super simple. So it’s not just copywriting as well — but all the different aspects of it,” Wong goes on. “The key thing is we want to go towards personalization. Right now ecommerce customers are all seeing the same standard written content. One of the challenges there it’s hard because humans are writing it right now and you can only produce one type of copy — and if you want to test it for other kinds of users you need to write another one.

“Whereas for us if we can do this process really well, and we are automating it, we can produce thousands of different kinds of description and copy for a website and every customer could see something different.”
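The many-variants idea Wong describes can be sketched as a simple selection step. Everything below is invented for illustration: the style tags, the preference list and the matching rule are not Hypotenuse's actual system, just a toy version of matching generated copy to an inferred shopper profile.

```python
# Illustrative only: pick the generated description variant whose style
# tags best overlap a visitor's inferred preferences. The tagging scheme
# and preference data are made up, not Hypotenuse's real pipeline.

def pick_variant(variants, user_prefs):
    """Return the variant sharing the most style tags with the user."""
    def overlap(variant):
        return len(set(variant["tags"]) & set(user_prefs))
    return max(variants, key=overlap)

variants = [
    {"text": "Vibrant tones and a fresh cotton feel...",
     "tags": ["vivid", "casual"]},
    {"text": "Romantic floral details, timeless style...",
     "tags": ["romantic", "classic"]},
]
chosen = pick_variant(variants, user_prefs=["vivid", "statement"])
print(chosen["tags"])  # ['vivid', 'casual']
```

Once variant generation is effectively free, the interesting engineering problem shifts to this selection layer — and to measuring which variants actually convert.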

It’s a disruptive vision for ecommerce that is likely to either delight or terrify — depending on your view of current levels of platform personalization around content. That process can wrap users in particular bubbles of perspective — and some argue such filtering has impacted culture and politics by having a corrosive impact on the communal experiences and consensus which underpin the social contract. But the stakes with ecommerce copy aren’t likely to be so high.

Still, once marketing text/copy no longer has a unit-specific production cost attached to it — and assuming ecommerce sites have access to enough user data in order to program tailored product descriptions — there’s no real limit to the ways in which robotically generated words could be reconfigured in the pursuit of a quick sale.

“Even within a brand there is actually a factor we can tweak which is how creative our model is,” says Wong, when asked if there’s any risk of the robot’s copy ending up feeling formulaic. “Some of our brands have like 50 polo shirts and all of them are almost exactly the same, other than maybe slight differences in the color. We are able to produce very unique and very different types of descriptions for each of them when we cue up the creativity of our model.”

“In a way it’s sometimes even better than a human because humans tend to fall into very, very similar ways of writing. Whereas this — because it’s learnt so much language over the web — it has a much wider range of tones and types of language that it can run through,” he adds.

What about copywriting and ad creative jobs? Isn’t Hypotenuse taking an axe to the very copywriting agencies his startup is hoping to woo as customers? Not so, argues Wong. “At the end of the day there are still editors. The AI helps them get to 95% of the way there. It helps them spark creativity when you produce the description but that last step of making sure it is something that exactly the customer wants — that’s usually still a final editor check,” he says, advocating for the human in the AI loop. “It only helps to make things much faster for them. But we still make sure there’s that last step of a human checking before they send it off.”

“Seeing the way NLP [natural language processing] research has changed over the past few years it feels like we’re really at an inception point,” Wong adds. “One year ago a lot of the things that we are doing now was not even possible. And some of the things that we see are becoming possible today — we didn’t expect it for one or two years’ time. So I think it could be, within the next few years, where we have models that are not just able to write language very well but you can almost speak to it and give it some information and it can generate these things on the go.”

*Per Wong, Hypotenuse’s robot is responsible for generating description ‘A’. Full marks if you could spot the AI’s tonal pitfalls

Facebook extends coronavirus work from home policy until July 2021

Facebook has joined Google in saying it will allow employees to work from home until the middle of next year as a result of the coronavirus pandemic.

“Based on guidance from health and government experts, as well as decisions drawn from our internal discussions about these matters, we are allowing employees to continue voluntarily working from home until July 2021,” a spokeswoman told the Reuters news agency.

Facebook also said it will provide employees with an additional $1,000 to spend on “home office needs”.

Late last month Google also extended its coronavirus remote work provision, saying staff would be able to continue working from home until the end of June 2021.

Both tech giants have major office presences in a number of cities around the world. And despite the pandemic forcing them into offering more flexible working arrangements than usual, the pair have continued to build out their physical office footprints, signalling a commitment to operating their own workplaces. (Perhaps unsurprisingly, given how much money they’ve ploughed in over the years to turn offices into perk-filled playgrounds designed to keep staff on site for longer — with benefits such as free snacks and meals, nap pods, video game arcades and even health centers.)

Earlier this month, Facebook secured the main office lease on an iconic building in New York, for example — adding 730,000 square feet to its existing 2.2 million square feet of office space. While Google has continued to push ahead with a flagship development in the UK capital’s King’s Cross area, with work resuming last month on the site for its planned London ‘landscraper’ HQ.

In late July, Apple said staff won’t return to offices until at least early 2021 — cautioning that any return to physical offices would depend on whether an effective vaccine and/or successful therapeutics are available. So the iPhone maker looks prepared for a home-working coronavirus long haul.

As questions swirl over the future of the physical office now that human contact is itself a public health risk, the deepest pocketed tech giants are paradoxically showing they’re not willing to abandon the traditional workplace altogether and go all in on modern technologies which allow office work to be done remotely.

Twitter is an exception. During the first wave of the pandemic the social network firmly and fully embraced remote work, telling staff back in May that they can work from home forever if they wish.

Whether remote work played any role in the company’s recent account breach is one open question. It has said phone spear phishing was used to trick staff into handing over network access credentials.

Certainly, security concerns have been generally raised about the risk of more staff working remotely during the pandemic — where they may be outside a corporate firewall and more vulnerable to attackers.

A Facebook spokeswoman did not respond when we asked whether the company will offer its own staff the option to work remotely permanently. But the company does not appear prepared to go so far — not least judging by signing new leases on massive office spaces.

Facebook has been retooling its approach to physical offices in the wake of the COVID-19 pandemic, announcing in May it would be setting up new company hubs in Denver, Dallas and Atlanta.

It also said it would focus on finding new hires in areas near its existing offices — including in cities such as San Diego, Portland, Philadelphia and Pittsburgh.

Facebook CEO Mark Zuckerberg said then that over the course of the next decade half of the company could be working fully remotely. Though he said certain kinds of roles would not be eligible for all-remote work — such as those doing work in divisions like hardware development, data centers, recruiting, policy and partnerships.

UK reported to be ditching coronavirus contacts tracing in favor of ‘risk rating’ app

What’s going on with the UK’s coronavirus contacts tracing app? Reports in the national press today suggest a launch of the much delayed software will happen this month but also that the app will no longer be able to automatically carry out contacts tracing.

The Times reports that a repackaged version of the app will only provide users with information about infection levels in their local area. The newspaper also suggests the app will let users provide personal data in order to calculate a personal risk score.

The Mail also reports that the scaled back software will not be able to carry out automated contacts tracing.

We’ve reached out to the Department for Health and Social Care (DHSC) with questions and will update this report with any response. DHSC is the government department leading development of the software, after the NHS’s digital division handed the app off.

As the coronavirus pandemic spread around the world this year, digital contacts tracing has been looked to as a modern tool to fight COVID-19 by leveraging the near ubiquity of smartphones to try to understand individual infection risk based on device proximity.

In the UK, an earlier attempt to launch an NHS COVID-19 app to support efforts to contain the virus by automating exposure notifications using Bluetooth signals faltered after the government opted for a model that centralized exposure data. This triggered privacy concerns and meant it could not plug into an API offered by Apple and Google — whose tech supports decentralized coronavirus contacts tracing apps.

At the same time, multiple countries and regions in Europe have launched decentralized contacts tracing apps this year. These apps use Bluetooth signals as a proxy for calculating exposure risk — crunching data on device for privacy reasons — including, most recently, Northern Ireland, which is part of the UK.
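For a flavor of what "crunching data on device" means in practice, here is a deliberately simplified exposure-scoring sketch in the spirit of decentralized Bluetooth contacts tracing. The thresholds and weights are invented for illustration and are much cruder than the real Apple/Google framework's configurable risk model.

```python
# Toy on-device exposure scoring, loosely in the spirit of decentralized
# Bluetooth contacts tracing. Thresholds and weights are invented and far
# simpler than the real Apple/Google Exposure Notification framework.

def exposure_risk(encounters, close_db=60, min_minutes=15):
    """Score minutes spent near infected devices, weighting closer contacts.

    encounters: list of (attenuation_db, duration_minutes) tuples for
    Bluetooth beacons later matched against published infection keys.
    Returns a risk score; a score >= min_minutes might trigger an alert.
    """
    score = 0.0
    for attenuation_db, minutes in encounters:
        if attenuation_db <= close_db:   # lower attenuation ~ closer contact
            score += minutes
        else:
            score += 0.5 * minutes       # weaker signal gets half weight
    return score

# Two close contacts (10 + 8 min) plus one distant one (20 min, half weight)
risk = exposure_risk([(55, 10), (58, 8), (70, 20)])
print(risk >= 15)  # True: this would cross the toy notification threshold
```

The privacy-relevant point is that all of this arithmetic happens on the handset: only anonymous keys are exchanged, and the match against infected users' published keys never leaves the device.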

However in the UK’s case, after initially heavily publicizing the forthcoming app — and urging the public to download it in its daily coronavirus briefings (despite the app not being available nationwide) — the government appears to have stepped almost entirely away from digital contacts tracing, claiming the Apple-Google API does not provide enough data to accurately calculate exposure risk via Bluetooth.

Decentralized Bluetooth coronavirus contacts tracing apps that are up and running elsewhere in Europe have reported total downloads and sometimes other bits of data. But there’s been no comprehensive assessment of how well they’re functioning as a COVID-fighting tool.

There have been some reports of bugs impacting operation in some cases, too. So it’s tricky to measure efficacy. Although the bald fact remains that having an app means there’s at least a chance it could identify contacts otherwise unknown to users, vs having no app and so no chance of that.

The Republic of Ireland is one of the European countries with a decentralized coronavirus contacts tracing app (which means it can interoperate with Northern Ireland’s app) — and it has defended how well the software is functioning, telling the BBC last month that 91 people had received a “close contact exposure alert” since launch. Although it’s not clear how many of them wouldn’t have been picked up via manual contacts tracing methods.

A government policy paper published at the end of last month which discussed the forthcoming DHSC app said it would allow citizens to: identify symptoms; order a test; and “feel supported” if they needed to self isolate. It would also let people scan QR codes at venues they’ve visited “to aid contact tracing and help understand the spread of the virus”.

The government paper also claimed the app would let users “quickly identify when they have been exposed to people who have COVID-19 or locations that may have been the source of multiple infections” — but without providing details of how that would be achieved.

“Any services that require more information from a citizen will be provided only on the basis of explicit consent,” it added.

Ahead of the launch of this repackaged app it’s notable that DHSC disbanded an ethics committee which had been put in place to advise the NHS on the app. Once development was handed over to the government, the committee was thanked for its time and sent on its way.

Speaking to BBC Radio 4’s World at One program today, professor Lilian Edwards — who was a member of the ethics committee — expressed concern at the reports of the government’s latest plans for the app.

“Although the data collection is being presented as voluntary it’s completely non-privacy preserving,” she told the program, discussing The Times’ report which suggests users will be nudged to provide personal data with the carrot of a ‘personal risk score’. “It’s going to involve the collection of a lot of personal, sensitive data — perhaps your health status, your retirement status, your occupation etc.

“This seems, again, an odd approach given that we know one of the reasons why the previous app didn’t really take off was because there was rather a loss of public trust and confidence in it, because of the worries partly about privacy and about data collection — it not being this privacy-preserving decentralized approach.”

“To mix the two up seems a strange way to go forward to me in terms of restoring and embedding that trust and confidence that your data won’t be shared with people you don’t want it to be,” Edwards added. “Like maybe insurers. Or repurposed in ways that you don’t know about. So it seems rather contrary to the mission of restoring trust and confidence in the whole test and trace endeavour.”

Concerns have also been raised about another element of the government’s digital response to the coronavirus — after it rushed to ink contracts with a number of tech giants, including Palantir and Google, granting them access to NHS data.

It was far less keen to publish details of these contracts — requiring a legal challenge by Open Democracy, which is warning over the impact of “Silicon Valley thinking” applied to public health services.

In another concerning development, privacy experts warned recently that the UK’s test and trace program as a whole breaches national data protection laws, after it emerged last month that the government failed to carry out a legally required privacy impact assessment ahead of launch.

TikTok announces first data center in Europe

TikTok, the Chinese video sharing app that’s found itself at the center of a geopolitical power struggle which threatens to put hard limits on its global growth this year, said today it will build its first data center in Europe.

The announcement of a TikTok data center in the EU also follows a landmark ruling by Europe’s top court last month that put international data transfers in the spotlight, dialling up the legal risk around processing data outside the bloc.

TikTok said the forthcoming data center, which will be located in Ireland, will store the data of its European users once it’s up and running (which is expected by early 2022) — with a slated investment into the country of around €420M (~$497M), according to a blog post penned by global CISO, Roland Cloutier.

“This investment in Ireland… will create hundreds of new jobs and play a key role in further strengthening the safeguarding and protection of TikTok user data, with a state of the art physical and network security defense system planned around this new operation,” Cloutier wrote, adding that the regional data center will have the added boon for European users of faster load times, improving the overall experience of using the app.

The social media app does not break out regional users — but a leaked ad deck suggested it had 17M+ MAUs in Europe at the start of last year.

The flipside of TikTok’s rise to hot social media app beloved of teens everywhere has been earning itself the ire of US president Trump — who earlier this month threatened to use executive powers to ban TikTok in the US unless it sells its US business to an American company. (Microsoft is in the frame as a buyer.)

Whether Trump has the power to block TikTok’s app is debatable. Tech savvy teenagers will surely deploy all their smarts to get around any geoblocks. But operational disruption looks inevitable — and that has been forcing TikTok to make a series of strategic tweaks in a bid to limit damage and/or avoid the very worst outcomes.

Since taking office the US president has shown himself willing to make international business extremely difficult for Chinese tech firms. In the case of mobile device and network kit maker, Huawei, Trump has limited domestic use of its tech and leant on allies to lock it out of their 5G networks (with some success) — citing national security concerns from links to the Chinese Communist Party.

His beef with TikTok is the same stated national security concerns, centered on its access to user data. (Though Trump may have his own personal reasons to dislike the app.)

TikTok, like every major social media app, gathers huge amounts of user data — and its privacy policy specifies it may share that data with third parties, including to fulfil “government inquiries”. So while its appetite for personal data looks much the same as that of US social media giants (like Facebook), its parent company, Beijing-based ByteDance, is subject to China’s Internet Security Law — which since 2017 has given the Chinese Communist Party sweeping powers to obtain data from digital companies. And while the US has its own intrusive digital surveillance laws, the existence of a Chinese mirror of the US state-linked data industrial complex has put tech firms right at the heart of geopolitics.

TikTok has been taking steps to try to insulate its international business from US-fuelled security concerns — and also provide some incentives to Trump for not quashing it — hiring Disney executive Kevin Mayer as CEO of TikTok and COO of ByteDance in May, and promising to create 10,000 jobs in the U.S., as well as claiming US user data is stored in the US.

In parallel it’s been reconfiguring how it operates in Europe, setting up an EMEA Trust and Safety Hub in Dublin, Ireland at the start of this year and building out its team on the ground. In June it also updated its regional terms of service — naming its Irish subsidiary as the local data controller alongside its UK entity, meaning European users’ data no longer falls under its US entity, TikTok Inc.

This reflects distinct rules around personal data which apply across the European Union and European Economic Area. So while European political leaders have not been actively attacking TikTok in the same way as Trump, the company still faces increased legal risk in the region.

Last month CJEU judges made it clear that data transfers to third countries can only be legal if EU users’ data is not being put at risk by problematic surveillance laws and practices. The CJEU ruling (aka ‘Schrems II’) means data processing in countries such as China and India — and, indeed, the US — is now firmly in the risk frame where EU data protection law is concerned.

One way of avoiding this risk is to process European users’ data locally. So TikTok opening a data center in Ireland may also be a response to Schrems II — in that it will offer a way for it to ensure it can comply with requirements flowing from the ruling.

Privacy commentators have suggested the CJEU decision may accelerate data localization efforts — a trend that’s also being seen in countries such as China and Russia (and, under Trump, the US too it seems).

EU data watchdogs have also warned there will be no grace period following the CJEU invalidating the US-EU Privacy Shield data transfer mechanism. While those using other still valid tools for international transfers are bound to carry out an assessment — and either suspend data flows if they identify risks or inform a supervisor that the data is still flowing (which could in turn trigger an investigation).

The EU’s data protection framework, GDPR, bakes in stiff penalties for violations — with fines that can hit 4% of a company’s global annual turnover. So the business risk around EU data protection is no longer small, even as wider geopolitical risks are upping the uncertainty for global Internet players.
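For scale: the GDPR's top tier of fines is in fact the greater of €20M or 4% of worldwide annual turnover (Article 83(5)), so the ceiling grows with company size. A quick calculation, using made-up turnover figures:

```python
# Top-tier GDPR fine ceiling: the greater of EUR 20M or 4% of global
# annual turnover (GDPR Article 83(5)). Turnover figures below are
# purely illustrative.

def max_gdpr_fine(annual_turnover_eur):
    """Return the maximum top-tier fine in euros for a given turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

print(max_gdpr_fine(100_000_000))     # smaller firm: the EUR 20M floor applies
print(max_gdpr_fine(50_000_000_000))  # large platform: 4% of 50B = EUR 2B
```

That turnover-linked ceiling is why EU data protection risk now registers on the balance sheets of global platforms rather than reading as a rounding error.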

“Protecting our community’s privacy and data is and will continue to be our priority,” TikTok’s CISO writes, adding: “Today’s announcement is just the latest part of our ongoing work to enhance our global capability and efforts to protect our users and the TikTok community.”

TikTok announces first data center in Europe

TikTok, the Chinese video sharing app that’s found itself at the center of a geopolitical power struggle which threatens to put hard limits on its global growth this year, said today it will build its first data center in Europe.

The announcement of a TikTok data center in the EU also follows a landmark ruling by Europe’s top court last month that put international data transfers in the spotlight, dialling up the legal risk around processing data outside the bloc.

TikTok said the forthcoming data center, which will be located in Ireland, will store the data of its European users once it’s up and running (which is expected by early 2022) — with a slated investment into the country of around €420M (~$497M), according to a blog post penned by global CISO, Roland Cloutier.

“This investment in Ireland… will create hundreds of new jobs and play a key role in further strengthening the safeguarding and protection of TikTok user data, with a state of the art physical and network security defense system planned around this new operation,” Cloutier wrote, adding that the regional data centre will have the added boon for European users of faster load times, improving the overall experience of using the app.

The social media app does not break out regional users — but a leaked ad deck suggested it had 17M+ MAUs in Europe at the start of last year.

The flipside of TikTok’s rise to hot social media app beloved of teens everywhere has been earning itself the ire of US president Trump — who earlier this month threatened to use executive powers to ban TikTok in the US unless it sells its US business to an American company. (Microsoft is in the frame as a buyer.)

Whether Trump has the power to block TikTok’s app is debatable. Tech savvy teenagers will surely deploy all their smarts to get around any geoblocks. But operational disruption looks inevitable — and that has been forcing TikTok to make a series of strategic tweaks in a bid to limit damage and/or avoid the very worst outcomes.

Since taking office the US president has shown himself willing to make international business extremely difficult for Chinese tech firms. In the case of mobile device and network kit maker, Huawei, Trump has limited domestic use of its tech and leant on allies to lock it out of their 5G networks (with some success) — citing national security concerns from links to the Chinese Communist Party.

His beef with TikTok is the same stated national security concerns, centered on its access to user data. (Though Trump may have his own personal reasons to dislike the app.)

TikTok, like every major social media app, gathers huge amounts of user data — which its privacy policy specifies it may share with third parties, including to fulfil “government inquiries”. So while its appetite for personal data looks much the same as that of US social media giants (like Facebook), its parent company, Beijing-based ByteDance, is subject to China’s Internet Security Law — which since 2017 has given the Chinese Communist Party sweeping powers to obtain data from digital companies. And while the US has its own intrusive digital surveillance laws, the existence of a Chinese mirror of the US state-linked data industrial complex has put tech firms right at the heart of geopolitics.

TikTok has been taking steps to try to insulate its international business from US-fuelled security concerns — and also provide some incentives to Trump for not quashing it — hiring Disney executive Kevin Mayer as CEO of TikTok and COO of ByteDance in May, and promising to create 10,000 jobs in the U.S., as well as claiming US user data is stored in the US.

In parallel it’s been reconfiguring how it operates in Europe, setting up an EMEA Trust and Safety Hub in Dublin, Ireland at the start of this year and building out its team on the ground. In June it also updated its regional terms of service — naming its Irish subsidiary as the local data controller alongside its UK entity, meaning European users’ data no longer falls under its US entity, TikTok Inc.

This reflects distinct rules around personal data which apply across the European Union and European Economic Area. So while European political leaders have not been actively attacking TikTok in the same way as Trump, the company still faces increased legal risk in the region.

Last month CJEU judges made it clear that data transfers to third countries can only be legal if EU users’ data is not being put at risk by problematic surveillance laws and practices. The CJEU ruling (aka ‘Schrems II’) means data processing in countries such as China and India — and, indeed, the US — is now firmly in the risk frame where EU data protection law is concerned.

One way of avoiding this risk is to process European users’ data locally. So TikTok opening a data center in Ireland may also be a response to Schrems II, offering a way to ensure it can comply with requirements flowing from the ruling.

Privacy commentators have suggested the CJEU decision may accelerate data localization efforts — a trend that’s also being seen in countries such as China and Russia (and, under Trump, the US too it seems).

EU data watchdogs have also warned there will be no grace period following the CJEU invalidating the US-EU Privacy Shield data transfer mechanism. Those using other, still-valid tools for international transfers are meanwhile required to carry out an assessment — and either suspend data flows if they identify risks, or inform a supervisor that the data is still flowing (which could in turn trigger an investigation).

The EU’s data protection framework, GDPR, bakes in stiff penalties for violations — with fines that can hit 4% of a company’s global annual turnover. So the business risk around EU data protection is no longer small, even as wider geopolitical risks are upping the uncertainty for global Internet players.
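To illustrate why that business risk is no longer small: for the most serious violations, the GDPR caps administrative fines at the greater of €20M or 4% of a company’s worldwide annual turnover (Article 83(5)). A minimal sketch of that ceiling, using a hypothetical turnover figure:

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Top-tier GDPR fine ceiling: the greater of EUR 20M
    or 4% of worldwide annual turnover (Art. 83(5) GDPR)."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# For a hypothetical company with EUR 5B in annual turnover,
# the ceiling is 4% of turnover: EUR 200M.
print(max_gdpr_fine(5_000_000_000))  # 200000000.0

# For a smaller company (EUR 100M turnover), the EUR 20M floor applies.
print(max_gdpr_fine(100_000_000))  # 20000000.0
```

The point of the greater-of structure is that the exposure scales with company size, so the largest platforms face the largest potential penalties.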

“Protecting our community’s privacy and data is and will continue to be our priority,” TikTok’s CISO writes, adding: “Today’s announcement is just the latest part of our ongoing work to enhance our global capability and efforts to protect our users and the TikTok community.”