An internal memo sent to staff by TikTok CEO Shou Zi Chew, and confirmed by TikTok, indicates that the company is consolidating its Core Product and Trust & Safety teams into a single organization as its future in the U.S. remains uncertain.
In the memo, Chew stated, “This new team will enable us to more effectively utilize our technical capabilities in order to achieve both business and safety objectives, as well as to develop the next generation of safety technology at a faster pace.”
As part of the changes, the memo said that Adam Presser, currently TikTok’s head of Operations and Trust & Safety, will become General Manager of TikTok USDS, the separate entity responsible for protecting U.S. national security interests. TikTok confirmed the move to TechCrunch.
Andy Bonillo, the current General Manager of TikTok USDS, who oversaw the organization’s creation and growth, will move into a new role as Senior Advisor, reporting to Presser.
Presser will continue to oversee TikTok’s Operations teams, while Grover will take over as head of Global Trust & Safety.
In the interim, Jenny Zi will lead TikTok LIVE.
The reorganization follows President Trump’s decision last month to extend TikTok’s divest-or-ban deadline for the third time; the deadline now stands at September 17.
Last week, U.S. Commerce Secretary Howard Lutnick said TikTok would go dark in the country if China does not approve a sale agreement for the app.
Lutnick also said the U.S. must control the app’s algorithm as part of any deal.
With its U.S. deal in jeopardy amid a tariff war with China, it is not surprising that TikTok is working to strengthen its USDS team.
“The T&S Product team will now be a part of the TikTok Product organization in order to achieve greater alignment across our robust trust and safety programs,” wrote Chew.
“We are establishing a new Platform Responsibility team, which will be headed by Adam Wang and will report to Fiona Zhi,” Chew continued. Over the past four years, Adam Wang has overseen the global launch and expansion of TikTok LIVE.
“I am confident that this reorganization will better align us with the opportunities that lie ahead,” the memo stated.

Meanwhile, Twitter’s former head of Trust & Safety is detailing the obstacles that decentralized social platforms face.
Yoel Roth, formerly the head of Trust & Safety at Twitter and now at Match Group, is voicing his concerns about the future of the open social web and its ability to combat misinformation, spam, and illegal content such as child sexual abuse material (CSAM).
In a recent interview, Roth worried about the lack of moderation tools available to the fediverse, the open social web that includes apps like Mastodon, Threads, and Pixelfed, as well as other open platforms like Bluesky.
He also reflected on notable moments in Twitter’s Trust & Safety history, including the decision to ban President Trump from the platform, the spread of misinformation by Russian bot farms, and how Twitter’s own users, including CEO Jack Dorsey, were fooled by bots.
Speaking on the podcast revolution.social with @Rabble, Roth noted that the platforms on the open social web trying to build more democratically run online communities are also the ones with the fewest resources when it comes to moderation tools.
“…when we examined Mastodon, other services based on the ActivityPub protocol, Bluesky in its early days, and Threads as Meta began to develop it, we observed that many of the services that were most committed to community-based control provided their communities with the fewest technical tools to enable them to administer their policies,” Roth stated.
He also observed a “substantial decline” in the transparency and legitimacy of moderation decisions on the open social web compared with those made at Twitter.
Twitter’s decision to ban Trump was controversial at the time, but the company at least explained its reasoning.
Today, social media providers are so focused on preventing bad actors from gaming their systems that they rarely explain their own decisions.
On many open social platforms, by contrast, users are not even told when their posts are taken down; the posts simply vanish, with no indication to anyone else that they ever existed.
“I don’t fault startups for being startups, or new software for lacking features. But if the whole point of the project was to increase the democratic legitimacy of governance, and all we’ve done is take a step backward on governance, has it really succeeded?” Roth asked.
For example, IFTAS (Independent Federated Trust & Safety) had been developing moderation tools for the fediverse, including tools to combat CSAM.
But the organization ran out of funding in 2025 and had to wind down many of its initiatives.
“We saw this coming two years ago,” Roth said. IFTAS itself anticipated it. He explained that the time and effort of everyone working in this space is largely donated, and that only goes so far. At some point, people have bills to pay and families to support, and compute costs mount when you need to run ML models to detect certain categories of harmful content. “The economics of this federated approach to trust and safety never quite added up, and it just all gets expensive,” he said. “And in my opinion, they still don’t.”
Bluesky, by contrast, has chosen to employ moderators and hire for trust and safety, but it limits itself to moderating its own app.
It also offers tools that let people customize their own moderation preferences.
“They’re doing this work at scale. There’s obviously room for improvement. I’d love to see them be a bit more transparent. But, fundamentally, they’re doing the right thing,” Roth said.
He acknowledges that as the service continues to decentralize, Bluesky will face questions about how to balance the needs of the community against the protection of the individual.
Still, someone has to enforce those protections, even when a user isn’t on the main Bluesky app.
The fediverse faces another challenge: its decision to prioritize privacy can thwart moderation efforts.
Twitter tried not to store sensitive data it didn’t need, but it still collected things like a user’s IP address, when they accessed the service, and device identifiers.
That information helped the company when it needed to do a forensic analysis of something like a Russian disinformation farm.
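As a rough illustration of how that kind of metadata gets used, not a description of Twitter’s actual tooling, the sketch below clusters accounts that reuse the same IP address or device identifier, the sort of starting point a forensic review of coordinated activity might take. The record fields and the helper function are hypothetical.

```python
from collections import defaultdict

# Hypothetical access-log records of the kind Roth describes: each row ties an
# account to an IP address, a device identifier, and an access time.
records = [
    {"account": "user_a", "ip": "203.0.113.7", "device": "dev-1", "ts": "2016-10-01T03:12:00Z"},
    {"account": "user_b", "ip": "203.0.113.7", "device": "dev-2", "ts": "2016-10-01T03:14:00Z"},
    {"account": "user_c", "ip": "198.51.100.4", "device": "dev-1", "ts": "2016-10-02T11:05:00Z"},
]

def cluster_by_shared_infrastructure(records):
    """Group accounts that reuse the same IP address or device identifier.

    Shared infrastructure is only a starting point for forensic review,
    not proof of coordination on its own.
    """
    by_key = defaultdict(set)
    for r in records:
        by_key[("ip", r["ip"])].add(r["account"])
        by_key[("device", r["device"])].add(r["account"])
    # Keep only infrastructure shared by more than one account.
    return {key: sorted(accounts) for key, accounts in by_key.items() if len(accounts) > 1}

print(cluster_by_shared_infrastructure(records))
```

Without logs like these, as the next point makes clear, even this simple kind of analysis is impossible.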
Fediverse administrators, by contrast, may not be collecting the necessary logs, or may decline to review them if they see doing so as a violation of user privacy.
But the reality is that without data, it’s much harder to determine who is actually a bot.
Roth offered a few examples from his time at Twitter, noting that it became fashionable for users to reply “bot” to anyone they disagreed with.
He says he initially set up an alert and manually reviewed these posts, combing through hundreds of “bot” accusations.
No one was ever right. Even Jack Dorsey, Twitter’s former CEO and co-founder, fell for it: he retweeted posts from a Russian actor posing as Crystal Johnson, a Black woman from New York.
“The CEO of the company liked this content, amplified it, and had no way of knowing as a user that Crystal Johnson was actually a Russian troll,” Roth said.
How AI is changing the landscape was another relevant topic. Roth pointed to recent research out of Stanford finding that, when properly tuned, large language models (LLMs) can be even more persuasive than humans in political contexts. That means a defense that relies solely on analyzing the content itself is no longer enough.
Instead, he recommended, companies should track other behavioral signals, like whether an entity is creating multiple accounts, posting via automation, or posting at odd hours of the day that correspond to different time zones.
“These are latent behavioral signals, even in content that is highly persuasive,” Roth said. “And I believe that’s the right place to start. If you start with the content, you’re already at a disadvantage in an arms race against the most advanced AI models.”
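To make the idea concrete, here is a minimal, hypothetical sketch of two of the behavioral signals Roth mentions, machine-like posting cadence and activity concentrated in odd hours for an account’s claimed time zone, computed from nothing more than a list of post timestamps. The function name and the thresholds are illustrative assumptions, not values from any real detection system.

```python
from datetime import datetime
from statistics import pstdev

def behavioral_flags(post_times, claimed_utc_offset_hours=0):
    """Flag two latent behavioral signals from post timestamps (ISO 8601 strings).

    Thresholds below are illustrative guesses, not production-tuned values.
    """
    ts = sorted(datetime.fromisoformat(t.replace("Z", "+00:00")) for t in post_times)

    # Signal 1: machine-like cadence. Near-constant gaps between posts are
    # more typical of automation than of a human posting by hand.
    gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
    machine_like = len(gaps) >= 5 and pstdev(gaps) < 2.0

    # Signal 2: activity clustered in the small hours of the account's
    # claimed time zone, which can hint the operator sits somewhere else.
    local_hours = [(t.hour + claimed_utc_offset_hours) % 24 for t in ts]
    night_share = sum(1 for h in local_hours if 2 <= h <= 5) / max(len(ts), 1)
    odd_hours = night_share > 0.5

    return {"machine_like_cadence": machine_like, "odd_hour_activity": odd_hours}

# Example: an account claiming U.S. Eastern Time (UTC-5) that posts exactly
# once a minute at around 3 a.m. local time trips both flags.
posts = [f"2016-10-01T08:{m:02d}:00Z" for m in range(10)]
print(behavioral_flags(posts, claimed_utc_offset_hours=-5))
```

The point of starting from signals like these, rather than from the text itself, is that they stay visible no matter how convincing the generated content becomes.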