Brightball

Waste Spammers Time to Kill Their Return on Investment

Business | Security | Phishing | Spam - July 30, 2022 // Barry

Continuing our series from 2012, when I accidentally ended up combating phishing and fraud for a year, we move on to the spam issue. Everything that happened that year was an exercise in triage. Problems were everywhere in the system and in the marketplace. The site I was working on was the leader in a niche space, but it wasn't just the phishers who tried to capitalize on the chaos; it was our competitors too.

Spam takes a time investment, and every time investment is a business decision. If you can't stop spammers completely, you can at least dramatically increase their costs...and have fun doing it.

In order to deploy DMARC safely, we cautiously rolled it out over the course of about a month and a half. This site was almost tailor-made to be heavily phished because all communication around transactions was handled directly over personal email. Anybody spending time on the site for a while could easily collect the email addresses of other users, which put them all in danger. It also kept us from having any visibility into what was going on in the system so that we could fix it.

We needed to move communications into the platform, so we built an internal message system. Now people could ask questions about listings, negotiate deals and discuss shipping all without having to give up their personal email addresses! And then the spam started over that system.

There's so much fraud that goes on in an online marketplace. People create fake listings to bait people into scam transactions. They'll buy merchandise, then use Western Union to scam the seller out of their payment and get a refund because they "accidentally overpaid". Sometimes users will just be actively phished in other ways.

Longtime users of the site were furious about what was happening. Somebody created a "<company_name>sucksnow.com" site outlining how bad things were and it got a lot of traction. Our competitors and users of their sites were actively using our own systems to solicit our users. They even offered free listings to try to entice people during all this, which just led to the same type of fraud there too.

Probably the only thing that kept it from working was that every competitor was doing it at the same time. There was no clear, single migration path. No Facebook to our MySpace. No Reddit to our Digg. That and our network effect bought us time to fix it.

User Trust Scores

The very first thing we learned about this site was that we could not ask users to change their behavior in any way in order to protect them. While some users were clearly just sending spam, others were actually vendors on the platform sending legitimate information out to their previous customers. We couldn't have a single anti-spam policy that applied to everybody or we would risk disrupting legitimate activity.

We had to know who we could trust and for that we needed a formula. On a scale of 0-100, how much do we trust this user account? By reducing it down to a single number, we could easily adjust the strictness of different policies based on this number.

So we started out by factoring in all sorts of things that we wanted to use...

  • Verified email address? (required anyway)
  • How long has the account been active?
  • How many transactions have they participated in, successfully?
    • Net positive feedback from those transactions?
  • Used a credit card successfully?
  • Verified a phone number?
    • Voice or Text?
    • Landline, Mobile, VoIP?
  • Are the area code, card zip code, listed shipping address and IP address geographically close?
  • Previously flagged for spam?
  • Paid for advertising on the site?
  • Active and verified vendor account?

Again, this was a triage situation, so we didn't get to do everything we wanted to. It's been 10 years, but I want to say we used vendor status, advertising, successful use of a credit card and transaction history to begin. Don't worry, we'll use the other tools in future articles.

Transaction history and account age would continually improve your score. From here, we were able to tier some of our enforcement when the time came.
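To make that concrete, here's a rough sketch in Ruby of what a first pass might have looked like. The weights and the helper methods (verified_vendor?, advertiser? and so on) are hypothetical, purely for illustration; after 10 years, I don't have the real formula anymore.

# Hypothetical 0-100 trust score; weights and helpers are illustrative,
# not the real production values.
def trust_score(user)
  score = 0
  score += 25 if user.verified_vendor?            # active, verified vendor account
  score += 15 if user.advertiser?                 # paid for advertising on the site
  score += 20 if user.successful_credit_card?     # used a credit card successfully
  score += [user.successful_transactions, 20].min # transaction history keeps improving it
  score += [user.account_age_in_months, 20].min   # so does account age
  score -= 50 if user.previously_flagged_for_spam?
  score.clamp(0, 100)
end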

Identifying Spam

For our purposes, the spam we were most focused on was messages that were copied and pasted to user after user. We could clearly see this pattern in the message system, but the messages were never exactly the same. You couldn't just run a database query to find messages that matched perfectly; you had to find messages that were similar to a certain degree.

Luckily, there's an algorithm designed exactly for this purpose: Levenshtein distance. Given two strings of text, the algorithm provides the minimum number of single-character edits (insertions, deletions or substitutions) needed to make the first string match the second. This number is the "distance" between the two.

From the Wikipedia example, the distance between "kitten" and "sitting" is 3.

  • kitten → sitten (substitution of "s" for "k")
  • sitten → sittin (substitution of "i" for "e")
  • sittin → sitting (insertion of "g" at the end)

If two strings are equal, the distance is zero. The distance cannot be greater than the length of the longer string.
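For reference, a minimal Ruby implementation of the classic Wagner-Fischer dynamic programming approach looks like this (plenty of gems provide one too, and Ruby bundles one internally in the did_you_mean gem):

# dist[i][j] holds the distance between the first i characters of a
# and the first j characters of b.
def levenshtein_distance(a, b)
  dist = Array.new(a.length + 1) do |i|
    Array.new(b.length + 1) { |j| i.zero? ? j : (j.zero? ? i : 0) }
  end

  (1..a.length).each do |i|
    (1..b.length).each do |j|
      cost = a[i - 1] == b[j - 1] ? 0 : 1
      dist[i][j] = [
        dist[i - 1][j] + 1,        # deletion
        dist[i][j - 1] + 1,        # insertion
        dist[i - 1][j - 1] + cost  # substitution
      ].min
    end
  end

  dist[a.length][b.length]
end

levenshtein_distance("kitten", "sitting") # => 3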

By doing some simple math, we get a similarity score.

(1 - distance/max_length) * 100 = Similarity Percentage

Sticking to our earlier example...

distance = levenshtein_distance("kitten", "sitting") # 3
max_length = ["kitten".length, "sitting".length].max # 7

# .to_f avoids integer division (3 / 7 would be 0 in Ruby, not 0.43)
similarity = (1 - distance.to_f / max_length) * 100 # 57.14%

The same approach works on sentences and full paragraphs too, not just single words.

Now that we have our detection algorithm ready, we need to apply it. Anytime a user sends a message, we'll compare it to the most recent messages sent by that user. For the sake of this example, we'll use a sample size of 10. So we will take the new message and calculate its similarity to the 10 most recent messages that were sent.

This is where our trust scores start to come into play. Based on our trust in the user account, we can adjust a few factors that will trigger our flag.

  • Similarity threshold percentage (60%, 80%, 95%, etc)
  • Number of similar messages in the sample (3, 5, 8 of 10)

So for a brand new account that we don't trust at all, if they send messages that are 60% similar to 3 of the last 10 messages, we start flagging new messages for further inspection. For a long-standing account that we trust, maybe we wait until the messages are 95% similar in 10 of 10 recent messages to trigger the flag.
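In code, the send-time check might look roughly like this. It builds on the trust_score and levenshtein_distance sketches above; the tier cutoffs and the sent_messages helper are illustrative, not what we actually shipped.

# Illustrative tiers: the less we trust the account, the stricter we are.
def spam_thresholds(trust)
  case trust
  when 0..20  then { similarity: 60, matches_needed: 3 }
  when 21..60 then { similarity: 80, matches_needed: 5 }
  else             { similarity: 95, matches_needed: 10 }
  end
end

def flag_as_spam?(user, new_message)
  limits = spam_thresholds(trust_score(user))
  recent = user.sent_messages.last(10) # our sample size of 10

  similar = recent.count do |old_message|
    distance = levenshtein_distance(new_message, old_message)
    max_length = [new_message.length, old_message.length].max
    (1 - distance.to_f / max_length) * 100 >= limits[:similarity]
  end

  similar >= limits[:matches_needed]
end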

Catch & Release Spam Detection

The big challenge with spam is that you need to detect it before it's delivered to avoid bothering your users. At the same time, you have to make sure you aren't being so aggressive that legitimate messages get blocked.

Our internal message system didn't have a Spam folder that we could route things to (yet), so we opted for a "Catch and Release" strategy. Once a user was flagged for potentially sending spam based on our formula above, we would hold all further messages they tried to send until we could review their activity over a larger time period and make a final decision.

Every 8 hours, we would review the user's activity.

  • Did they continue sending more messages like this?
  • How bad did it get?
  • What was the total send rate for the last 8 hours?

Initially, we reviewed all of these manually to verify that what we thought should happen actually did. Eventually, we were able to trust the system.

  • If a user had only tried to send a few messages over that time period, it was a false alarm and we would flag them as safe before releasing the messages for delivery.
  • If a user kept on sending message after message, we would flag the similar messages as spam without delivering them.
  • If the spamming behavior was extremely high, we would initially ban the entire account.
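A rough sketch of that review job, with placeholder cutoffs (5 and 100) and hypothetical helper names standing in for whatever we actually tuned by hand:

# Runs every 8 hours for each user flagged by the similarity check.
def review_flagged_user(user)
  held = user.messages_held_in_last_8_hours

  if held.count <= 5          # only a few messages: false alarm
    user.mark_safe!
    held.each(&:deliver!)     # release them for delivery
  elsif held.count >= 100     # extreme volume: ban the account (at first...)
    user.ban!
  else                        # steady spam: quietly withhold the messages
    held.each(&:mark_as_spam!)
  end
end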

This system was very effective. Complaints were significantly reduced and most messages that were caught in our filter moved on to their final destinations without issue.

The bans, however, were not effective.

Even Spammers Need a Good Return on Time

Banning an account turned out to be pretty useless. The spammer would just create a new account, forcing us to repeat the entire process from scratch.

We found them copying entire sets of listing text into each message just to get around our algorithms. That didn't help them, because we would simply lower the thresholds for new accounts and let the catch and release process do its work.

No, we needed to find a way to fight back. We needed to waste their time as much as they'd been wasting ours.

First, we started adding a Captcha after a certain number of messages were sent in a given time period. Both the number of messages and the time period were based entirely on our trust scores, so long-time users of the site never saw these at all.

Next, we stopped banning accounts once we caught them. We just flagged them internally as spammers and refused to deliver anything they tried to send. We set up a background job that would randomly ban these accounts within 1-2 weeks of our initial flag. This gave the appearance that nothing had changed and that we still "caught them" before they created a new account. I later learned that this was called Shadow Banning, but at the time we only knew of a Photoshop forum that did this to problem posters.

Using some form of fingerprinting that made sense for 2012, we could identify the new accounts created after a ban and flag them immediately, before even one new message was sent. It turns out that invasive marketing techniques are actually really helpful for fraud prevention.

Lastly, we would still allow messages to be delivered to other accounts that had been flagged, so that if spammers tried sending test messages between their own accounts to see if everything still worked...it would.
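Putting those last two pieces together, the delivery decision might look something like this sketch; schedule_ban, ban_scheduled? and flagged_as_spammer? are hypothetical names for illustration.

# Shadow-ban-aware delivery: flagged spammers never reach real users,
# but spammer-to-spammer mail still goes through so their tests pass.
def deliver_message(sender, recipient, message)
  if sender.flagged_as_spammer?
    # Ban at a random point 1-2 weeks out, so the account's demise
    # never correlates with the moment we actually caught it.
    delay = rand((7 * 24 * 3600)..(14 * 24 * 3600))
    schedule_ban(sender, in_seconds: delay) unless sender.ban_scheduled?

    message.deliver! if recipient.flagged_as_spammer?
  else
    message.deliver!
  end
end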

Oh my goodness, did this work beautifully! We set up a dashboard in our office called "Time Wasted by These Poor...Individuals" (use your imagination on the real name).

It showed us a chart of the verified and caught spammers, as well as how much time they were spending manually typing in messages and completing Captchas...for messages that were never delivered. Our worst offender was doing this for 15 hours a day straight.

That put a smile on all of our faces and significantly improved office morale. It's also the moment that I started developing a real love for this stuff.