Twitter has announced a new Crisis Misinformation Policy. The policy aims to stop the spread of false information during major events such as natural disasters, armed conflicts, or public health emergencies. Twitter says misinformation in these situations can cause real harm, because people may make unsafe decisions based on false information. The company wants to help people find reliable sources quickly during a crisis.
The new rules will apply only when a situation meets certain criteria. Twitter will look for evidence of widespread misleading content and assess whether that content poses a serious threat to public safety. The company will work with trusted sources, including journalists, humanitarian groups, and government agencies, to identify these crises.
When a crisis is happening, Twitter will take specific actions. It will prioritize credible information in search results and timelines. Tweets containing misleading claims will get a warning label, which people will see before they can view the tweet. Twitter may also reduce how many people see these tweets by not recommending or amplifying them.

The company will apply these rules to content from all accounts, including high-profile users. It will focus on clearly false claims that could cause physical harm, such as false details about where to find shelter or medical help. Twitter stated it is vital to protect people when accurate information matters most. The policy builds on its existing safety rules and was developed in consultation with experts. Twitter will keep improving these measures as needed.

