How do you deal with scammers on dating apps? Throw robots at them
It’s been nearly 10 years since swiping left became the gesture of choice for millions of daters with the advent of Tinder. Ever since the app first appeared on September 12, 2012, it seems every dating app has tried to find the best way for users to browse potential matches at a pace that can set even the best-meaning thumbs on fire.
Although it’s been a decade, what so many of these apps still struggle with is managing bot, spam, and scam accounts. Gizmodo has previously reported on the thousands of people who told the FTC they were scammed through Tinder during the pandemic. Innocent people looking for love have reported being scammed out of tens of thousands of dollars, being tricked into providing credit card information, or even being threatened after refusing to pay. It took Tinder until last year to introduce identity verification for most global users of the app, and it remains voluntary for the majority of them. The idea of being swindled on a dating app even got the big-screen treatment thanks to the popular Netflix documentary The Tinder Swindler.
Bots and scammers are rampant on dating apps. A 2017 study published by researchers at the University of Southern California pointed out that determining whether a user is a bot is particularly difficult because there are few ways to view users’ profiles without interacting with them. These accounts often look more legitimate than not, with original photos and linked social media accounts. Scammers are even tougher to stop, because even when a predatory account is deleted, the person behind it can easily return to the platform under an entirely different identity.
Well, one dating app has a new approach to dealing with scammers and bots on its platform: turning them against each other. In an August blog post, Brian Weinreich, co-founder of the video-centric dating startup Filter Off, said that when a suspected scammer first signs up for the site, they’re placed in a so-called “Dark Dating Pool” away from other users. The developer said his small team flooded the pool with GPT-3-based chatbots and collected the most hilarious examples of scammers attempting to scam a being with no compassion or love (sorry, but the AI just isn’t there yet).
Weinreich wrote that all chats are encrypted and that they “err on the side of caution” when placing users in the Dark Dating Pool, which could mean some potential scammers slip through to the regular pool. In a Wednesday interview with TechCrunch, Weinreich said he uses algorithms that flag accounts based on patterns in how fraudulent users most often sign up for the app. Oddly enough, these scammers will apparently try to scam each other, arguing over who should send a $40 gift card.
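To make the idea concrete, here is a minimal sketch of sign-up screening of the kind Weinreich describes: score a handful of heuristic signals at registration and, past a threshold, route the account into the shadow pool. Every signal name, weight, and threshold below is an illustrative assumption, not Filter Off’s actual implementation.

```python
# Hypothetical sign-up screening sketch. All signals, weights, and the
# threshold are assumptions for illustration -- not Filter Off's code.

DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.net"}  # assumed list

def risk_score(signup: dict) -> int:
    """Accumulate points for patterns common to fraudulent sign-ups."""
    score = 0
    email = signup.get("email", "")
    if email.split("@")[-1] in DISPOSABLE_DOMAINS:
        score += 3  # throwaway email provider
    if signup.get("profile_photo_count", 0) == 0:
        score += 2  # no original photos on the profile
    if signup.get("seconds_to_complete_signup", 999) < 20:
        score += 2  # suspiciously fast, likely scripted registration
    if signup.get("ip_country") != signup.get("claimed_country"):
        score += 1  # location mismatch between IP and profile
    return score

def assign_pool(signup: dict, threshold: int = 4) -> str:
    """'Err on the side of caution': high-risk accounts only ever see bots."""
    return "dark_pool" if risk_score(signup) >= threshold else "main_pool"

suspicious = {
    "email": "lisa@mailinator.com",
    "profile_photo_count": 0,
    "seconds_to_complete_signup": 8,
    "ip_country": "NG",
    "claimed_country": "US",
}
print(assign_pool(suspicious))  # -> dark_pool
```

The key design choice, per the blog post, is that a flagged account is never told it was flagged: it just quietly matches with chatbots instead of people.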
“We probably have over 1,000 scammers that I know of who are only actively talking to bots,” Weinreich told TechCrunch.
Although Gizmodo could not independently verify much of what the developer claims, reading the posted chat logs between bot and scammer is like watching the Aliens vs. Predator of the scummy dating scene. The bots run on preset auto-responses, so even a basic question can produce repeated or directly contradictory answers. How does a bot respond to “Are you on WhatsApp?” Well, first it says “no,” then “no,” then “no,” and finally “yes.”
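A bot built from a small pool of canned replies, chosen with no memory of what it already said, would reproduce exactly this behavior. A minimal sketch, assuming a simple keyword-matched reply table (the keywords and responses are invented for illustration, not pulled from Filter Off’s bots):

```python
import random

# Memoryless canned-response bot sketch. Because each reply is drawn
# independently, repeated questions can yield repeated or contradictory
# answers -- "no, no, no... yes". The reply table is an assumption.
CANNED = {
    "whatsapp": ["No, I’m not on WhatsApp.", "No, I’m not.", "Yes."],
    "number":   ["My number is 555-555-0123.", "Why do you need my number?"],
    "default":  ["That’s interesting!", "Tell me more.", "I’m visiting family."],
}

def reply(message: str) -> str:
    """Pick a canned response by keyword, with no conversation state."""
    text = message.lower()
    for keyword, options in CANNED.items():
        if keyword != "default" and keyword in text:
            return random.choice(options)
    return random.choice(CANNED["default"])

for _ in range(4):
    print(reply("Are you on WhatsApp?"))
```

Run the loop a few times and the “no, no, no, yes” pattern falls out naturally; each pass samples the same three options independently.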
Here are some of the best clips we found from when a scummy scammer met his match against the most stubborn adversary imaginable: an unfeeling AI bot.
“No, I’m not on WhatsApp… No, I’m not… I’m not…”
How does a bot respond to a simple question, like being asked whether it’s on another app? Well, the implication is that the scammer wants to move off the dating app and onto less-moderated texting on WhatsApp, but the bot doesn’t seem to have a definite answer.
“I know it’s a place! I just don’t know how to write it.”
Sounds like the kind of thing I would try myself, and I’m not a bot (I swear). Scammers try to get as many details from you as possible, starting broad and working down to specifics as they gain your trust. But even the most advanced AI chatbot, which derives its speech algorithmically from text found on the internet, can’t talk about what isn’t there. In this case, it seems the bot either has no real place in mind or something keeps it from spelling out the name of the place it’s thinking of. Anyway, good luck getting credit card information from Heather, who lives in “____.”
“Yes, I live in Niceville, Florida.”
Score one for this AI bot: Niceville, Florida is indeed a real place. Unfortunately for the supposed scammer, the AI isn’t too impressed with his early-bird hustle. The bot also seems to keep its own time, because not only is it 10:49 a.m. “here” in Niceville, it’s also 3:49 p.m. “there” in Niceville. Hell, I wouldn’t blame you for sketching out a sci-fi story idea based on that conversation alone. That is, until the scammer desperately tries to find the bot on Facebook but can’t find a way to get it to give up its last name. Take this as a lesson, fellow users of these kinds of apps: never give out your last name until you can verify the other person and are sure they won’t come back to hurt you.
“I’m visiting family.”
Honestly, this conversation feels more like a learning experience. If you’re on a dating app and start feeling creeped out by someone you’ve met online, just keep typing “I’m visiting family” over and over until the other person gets the message that you’re not interested.
“I think we’re both chatting with bots.”
No, I don’t think gaslighting is funny per se. But watching a bot do it to a fraudulent account obsessed with getting the bot’s number is damn hilarious. This particular bot seemed to know exactly how to toy with this humble crook. It was so effective at denying every request to “prove [sic]” it was human that I’m almost convinced the AI really is flesh and blood. The fact that the bot’s supposed phone number beginning with “555-555” didn’t tip off the scammer is another reason to believe the scammer may be more robotic than he thinks.
“No, I don’t have kids…I’m a stay-at-home mom…a dad.”
This sounds like the kind of tough, strict, but fair parent I can get behind. I’d love to meet a parent this strong and well-rounded, who not only handles the finances but always finds time to eat healthy and exercise. They have a wife and two kids, and that family is lucky to have them in their corner. Too bad the alleged scammer is too stuck in his old ways to move with the times and come out of his shell.
“My email address is Lisa[@]e-mail.com.”
The bots seem to be programmed to all give out the same fake emails and phone numbers, as evidenced by other videos posted by the app’s developers. The scammer also seemed to notice this when he asked, “How many people use this number on this app?” If only he knew there weren’t really “people” using that number at all. Then the bot delivers the killing blow:
“This is not a conversation with someone you love,” the bot wrote. “It’s a conversation with someone trying to rip you off.”
And then immediately backtracks, just to keep the crook on his toes.
“What? No. I’m not a crook from Africa!”