"AYS?", a first-of-its-kind feature in the dating space, has already reduced inappropriate language in messages by more than 10 per cent in early testing, the company said in a statement on Thursday.
It uses AI to detect harmful language and proactively intervenes to warn the sender their message may be offensive, asking them to pause before hitting send.
The AI was built on language members have reported in the past, and it will continue to evolve and improve over time.
"The early results from these features show us that intervention done the right way can be really meaningful in changing behaviour and building a community where everyone feels like they can be themselves," said Tracey Breeden, Head of Safety and Social Advocacy for Match Group.
"AYS?" joins Tinder's existing suite of harm-reduction tools, including "Does This Bother You?", which offers proactive support to members when harmful language is detected in a message they receive. Together, the company said, these features have contributed to more matches and longer conversations during the app's busiest year yet.
Tinder's long-standing commitment to safety started with the Swipe, which requires mutual interest before a message can be sent.
Over the past several years, the app has worked with the Match Group Advisory Council (MGAC) to continue building best-in-class safety features.