In Part 1, we dug into the history of payment card fraud, right back to when credit card payments were done by hand in one of those zip-zap machines that actually imprinted the card’s data onto two carbon-paper payment slips, one for the customer and the other for the merchant.
The slips were supposed to be signed in front of the merchant, who was supposed to check against the signature on the back of the card, creating a very crude form of multi-factor authentication (MFA) in which the card was “something you had,” and how to do the signature was “something you knew.”
But this was largely worthless as a security precaution, given that few merchants bothered to check at all, and that those who did were faced with “matching” a signature scrawled on the slip against a tiny, often blurry and worn-away reference version squeezed onto the thin strip on the back of the card.
Merchants could call a bank phone number and read out the credit card details to request authorization for larger amounts, but the time required made this impractical at busy sales points such as retail stores and restaurants, so many transactions were simply “taken on faith” by the seller.
By the 1980s, electronic credit card machines appeared that could read the card data off the magnetic stripe on the back, and then use a built-in modem to call the bank for automatic authorization.
Card-swipe machines reduced the ease with which fraudsters could present stolen or canceled cards, but didn’t improve on the second factor of authentication used to verify the buyer, which was still just a basic physical signature check.
Swipe machines also failed to solve the problem of card cloning, where fraudsters took identifying data from a legitimate card, namely the number, expiry date, and cardholder’s name, and encoded that onto a blank replica card to turn it into a copy of the genuine one.
The data needed to clone a card was captured on every carbon-copy paper form by imprinting it from the card, so it was at risk of being copied and abused or sold on by unscrupulous staff or retailers, which was bad enough.
But the very same data was also present on the card’s magstripe, and criminals quickly figured out how to modify existing digital card machines to turn them into devices known as skimmers that would read the card twice in one swipe.
The machine itself would read off the data as usual, thus working as normal and arousing no obvious suspicion.
But a second, hidden magnetic read head (similar technology to the heads used in old cassette players) would make a simultaneous second copy of the data and store it away on a hidden memory card for later, or transmit it over Wi-Fi or Bluetooth to a listening device in the area.
Now, even the customers of honest merchants with honest staff could be defrauded by means of a tampered device plugged in at a point of sale while no one was watching.
The next cybersecurity step was the inclusion of a secure chip on each card, using the same technology as the SIM cards used in mobile phones.
These cards are programmed with unique cryptographic keys by the card issuer, and can then carry out one-time cryptographic calculations, using those keys as part of the authentication process for each transaction.
But the stored keys themselves can’t easily be read out, let alone modified in order to create a clone of someone else’s card.
In theory, chip-enabled cards are tamper-resistant to the point that trying to access any secret data inside the chip will ruin the card and stop it working permanently.
The chip therefore can’t reliably be copied to steal its data, or reprogrammed to masquerade as someone else’s card, thus clamping down on “cloned” cards that are working replicas of the real thing.
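To see why those one-time calculations matter, here’s a deliberately simplified Python sketch. (The key, the MAC algorithm and the message format are invented stand-ins; real EMV chips use their own keys and ciphers, so treat this as the principle rather than the protocol.)

```python
import hashlib
import hmac

# Hypothetical per-card secret, programmed in by the issuer and never
# readable off the chip. Real EMV keys and ciphers differ; this is a stand-in.
CARD_KEY = b"issuer-programmed-secret"

def card_cryptogram(tx_data: bytes, counter: int) -> bytes:
    """What the chip does, in spirit: a keyed one-time MAC over this
    transaction, plus a counter that ticks up with every use."""
    msg = tx_data + counter.to_bytes(4, "big")
    return hmac.new(CARD_KEY, msg, hashlib.sha256).digest()

tx = b"amount=99.99;merchant=EXAMPLE"
c1 = card_cryptogram(tx, counter=41)
c2 = card_cryptogram(tx, counter=42)

# Same purchase data, different counter: the cryptograms don't match, so
# replaying a captured one fails, and neither one gives away CARD_KEY.
assert c1 != c2
```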
Like SIM cards, the chips in payment cards can also be protected with a PIN code from four to eight digits long that is not stored on the card, and isn’t even known to the issuer.
This PIN needs to be sent to the chip before it will respond to cryptographic requests such as “please authorize this transaction.”
After three wrong tries, the chip itself locks up and can’t be reactivated, thus making it unlikely that a criminal will guess the right PIN while trying to use a stolen card.
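If you’re wondering how little guessing room that leaves, here’s a minimal sketch of the sort of retry-counter logic involved. (This is a conceptual model only, not real chip firmware, which enforces the same idea inside tamper-resistant hardware.)

```python
class ChipPinGuard:
    """Simplified model of a payment chip's PIN retry counter."""

    MAX_TRIES = 3

    def __init__(self, pin: str):
        self._pin = pin                # never leaves the chip in real life
        self._tries_left = self.MAX_TRIES
        self.locked = False

    def verify(self, attempt: str) -> bool:
        if self.locked:
            return False               # a locked chip ignores all further requests
        if attempt == self._pin:
            self._tries_left = self.MAX_TRIES   # a correct PIN resets the counter
            return True
        self._tries_left -= 1
        if self._tries_left == 0:
            self.locked = True         # third strike: the chip locks itself up
        return False

card = ChipPinGuard("4071")
print([card.verify(guess) for guess in ("0000", "1234", "9999", "4071")])
# -> [False, False, False, False]: even the right PIN fails once locked
```

Three guesses against at least 10,000 possible four-digit PINs gives a thief roughly a 0.03% chance of success before the chip goes dark.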
Of course, cards can still have their basic data stolen, whether read off (or scanned in with a camera) from the front of the card, leeched from the magstripe, or captured via a hacked website when the card is used online.
A modest extra precaution to prevent skimmed card data from being directly usable online is a short, secret code (usually three or four digits printed on the back of the card) that is stored neither on the magstripe nor in the chip, known as the CVV, or card verification value.
Online transactions typically require the purchaser to provide this secret code, on the assumption that a criminal who has skimmed the card won’t have acquired the CVV, and therefore won’t know what to type in.
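As a toy model of that assumption, consider the sketch below, where the field names are invented for illustration. (Real issuers derive the CVV cryptographically from card data under secret keys rather than keeping a lookup table, but the effect is the same: skimmed stripe data simply doesn’t include it.)

```python
# Toy model of the CVV precaution; field names are invented for illustration.
ISSUER_CVV_ON_FILE = {"4111111111111111": "123"}

def authorize_card_not_present(tx: dict) -> str:
    cvv = tx.get("cvv")
    if cvv is None:
        return "DECLINE: no CVV supplied"          # skimmed stripe data stops here
    if cvv != ISSUER_CVV_ON_FILE.get(tx["pan"]):
        return "DECLINE: CVV mismatch"
    return "APPROVE"

# Everything a magstripe skimmer can harvest... but no CVV:
skimmed = {"pan": "4111111111111111", "expiry": "12/27", "name": "A N OTHER"}
print(authorize_card_not_present(skimmed))         # DECLINE: no CVV supplied
```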
But we all know how criminals work around that precaution – by using phishing attacks, where they lure victims to fake websites.
Those fake websites pretend to process a real transaction, thereby tricking the victim into typing in not only their name, card number and expiry date…
…but also their CVV code.
As we wrote in Part 1, online card fraud, in which phishing criminals put through bogus transactions, remains a serious problem.
But fraudulent in-person transactions, where both the purchaser and the card are physically present, have indeed become more difficult thanks to the security precautions described above.
This is good news, because card-present fraud still represents a clear, high-value danger.
Indeed, some criminal gangs still go in for card-present scams, where they (or an underling, who may be given little choice but to take on the risk of being caught red-handed) actually show up at high-end retail stores in the hope of buying and taking away expensive items such as jewelry, precious metals, or latest-model laptops and phones.
This way, they take immediate possession of the stolen property, and avoid the need to provide a delivery address that could be staked out or raided.
As an extra precaution against this sort of fraud, as well as for greater convenience while out and about, many users are going one step further than just relying on tamper-resistant payment cards with secure PINs.
They’re loading their card details into apps such as Apple Wallet and Google Wallet, and using Apple Pay and Google Pay at the point of sale, so that they don’t need to carry their cards with them at all.
As we explained last time, these phone-based payments are in some ways safer than using cards directly, for the reasons we listed there.
But “card-present” fraud is back in the spotlight, even though the victims aren’t actually using, or even carrying, their cards, because they have switched to paying via their phones instead.
Annoyingly, the card-replacing system used by apps such as Apple Wallet and Google Wallet turns out to have an Achilles’ heel.
This makes it possible for well-organized criminals to make phone-based payments against your card, without ever touching or even seeing your card, without knowing the card’s PIN, and without hacking into your phone or bypassing its lock code.
Some of the tricks behind this Achilles’ heel were revealed in a recent paper given by threat researcher Ford Merrill at an M3AAWG conference in Lisbon, and written up in some detail by well-known investigative cybersecurity journalist Brian Krebs.
The good news, if we can call it that, is that these phone-based payment crimes still largely depend on phishing to get started, so that being well-informed about, and resilient to, phishing scams is an excellent defense.
The bad news is that Krebs estimates that these new-style “card-present with no actual card present” crimes may have netted cybercriminals around the world as much as $15 billion in the past year.
Furthermore, the tricks used by the criminals are not yet widely known, which probably lures merchants into being more trusting than they should be when faced with suspicious-seeming purchases or purchasers that somehow seem to be “vouched for” by the involvement of Apple Pay or Google Pay in the process.
As Krebs points out, and as Merrill uncovered in his research, to kickstart this crime the criminals simply need to load your card data into a wallet app on a phone of their own.
And for that, they don’t need to recover the PIN that would unlock the card itself.
Instead, they need just a single MFA code from your bank to authorize the initial “phone walletization” of your card data.
They can get all of that by phishing, as long as they can lure you into typing your card details into a fake website, and then into augmenting that phished data with a current MFA code.
The playbook of the scammers investigated by Merrill usually involves tricking you into thinking you are authorizing the payment of an apparently legitimate expense, such as a home delivery charge or a missed payment for a toll road in your area.
As an example, if you have driven on the toll road in question recently, you might let your anti-phishing guard down by reasoning that even if the penalty charge is a mistake, you’re almost certain to get the payment refunded later once you find proof that your original payment already went through.
Your not-so-unreasonable conclusion might be that it’s worth taking the risk of paying unnecessarily just in case the toll charge is genuine, given that missing a penalty-payment deadline might land you in deeper trouble, such as getting a summons to appear in court.
But in these new-style payment wallet scams, there’s no missed home delivery, and there’s no toll-road penalty or parking fine.
Instead, the criminals will use the phished MFA code to authorize one of their mobile phone wallets to act as a payment authorizer against your card.
Sadly, even though this crime requires a lot of direct, hands-on human involvement and a plentiful supply of ready-to-use mobile phones, the $15 billion a year in criminal takings estimated by Krebs makes it worthwhile for the criminals.
They don’t need to automate their operation entirely, but can instead adopt the sort of human-led attack process that serves ransomware “affiliates” so well.
Remember that the phones used by the syndicates don’t need to be the very latest models, but merely recent enough to support the latest wallet apps.
Even devices with little or no direct resale value of their own are fine for this crime.
Also, because the criminals are using phones that they set up themselves, they don’t need to do any lock code hacking or to exploit any security bypasses on those devices.
These “walletized” phones are turned into hard currency or physical items in several ways, from straightforward in-person purchases to a remote “relay” trick that we’ll explain below.
That last trick may sound unnecessary, but it neatly sidesteps the problem of shipping phones pre-loaded with fraudulent wallets overseas, which takes time, costs money, and is liable to detection and interception.
Remember that part of the security of the system is that loading a card into a wallet on a phone is a one-time job that requires a one-time MFA code, which the criminals must use within a few minutes of the victim being phished.
Once the card is loaded into one phone’s wallet, it can’t be cloned or transferred to another device without getting hold of a new MFA code.
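Loosely modeled, the server-side logic that enforces those two properties (a short validity window, and one use only) might look like the sketch below; the six-digit format, the five-minute window and the function names are all assumptions for illustration.

```python
import secrets
import time

# Toy model of a one-time wallet-provisioning code (details invented).
VALIDITY_SECONDS = 5 * 60
_issued: dict[str, tuple[float, bool]] = {}   # code -> (issued_at, used)

def issue_code() -> str:
    code = f"{secrets.randbelow(1_000_000):06d}"
    _issued[code] = (time.time(), False)
    return code

def redeem_code(code: str) -> bool:
    issued_at, used = _issued.get(code, (0.0, True))   # unknown codes fail
    if used or time.time() - issued_at > VALIDITY_SECONDS:
        return False
    _issued[code] = (issued_at, True)         # consumed on first use
    return True

otp = issue_code()                # sent to the victim... and phished in real time
assert redeem_code(otp) is True   # the crooks' phone gets walletized
assert redeem_code(otp) is False  # the same code can't enroll a second phone
```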
To process transactions overseas, the criminals therefore bring the transaction to the walletized phone, instead of sending the walletized phone to the country where they want to buy goods fraudulently.
The trick is subtle, and it sounds as though it ought to be cryptographically impossible.
But it relies on the fact that although the NFC reader at the merchant’s jewelry store or high-end phone shop knows it’s communicating with a phone that’s just a few centimeters away (the typical range of NFC transmissions), it can’t tell how that phone is communicating with the online payment system.
The payment terminal knows that it sent out an NFC request asking for a cryptographically strong authentication from a phone that was held up to it.
But the merchant’s device can’t tell whether the phone that exchanged the NFC data and supplied the final authorization code is the same one that negotiated with Apple Pay or Google Pay to acquire that authorization.
Astonishingly, perhaps, the accomplice overseas doesn’t even need an official wallet app on their device.
All they need is an app that will negotiate with the merchant’s NFC reader to initiate the transaction, then use NFCGate-style transaction proxying to hand the request over to the phone on the other side of the world on which the relevant wallet is located.
When the authorization code comes back, the accomplice’s rogue NFCGate app can play it back to the merchant’s NFC payment terminal to complete the transaction.
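To make the relay trick concrete, here’s a self-contained Python simulation. There’s no real NFC and no real payment protocol here: the key and messages are invented, and two in-memory queues stand in for the internet link that NFCGate-style proxying provides between the conspirators’ phones. What it demonstrates is that the answer the terminal receives is cryptographically valid even though the device that delivered it holds no keys at all.

```python
import hashlib
import hmac
import queue
import threading

# Invented key, standing in for whatever secrets ended up in the
# fraudulently walletized phone after the phishing step.
WALLET_KEY = b"key-provisioned-into-the-fraudulent-wallet"

def walletized_phone(requests: queue.Queue, replies: queue.Queue) -> None:
    """The phone holding the fraudulent wallet, far away from the store.
    It answers a relayed challenge exactly as it would answer a local tap."""
    challenge = requests.get()
    replies.put(hmac.new(WALLET_KEY, challenge, hashlib.sha256).digest())

def rogue_relay_app(challenge: bytes,
                    requests: queue.Queue, replies: queue.Queue) -> bytes:
    """The accomplice's in-store app: it holds no keys at all. It simply
    forwards the terminal's challenge and plays back the wallet's answer."""
    requests.put(challenge)
    return replies.get()

requests, replies = queue.Queue(), queue.Queue()
threading.Thread(target=walletized_phone, args=(requests, replies)).start()

# The merchant's terminal challenges whatever phone is held up to it...
challenge = b"authorize: 4999.00 at JEWELRY-STORE, nonce 8f1c"
response = rogue_relay_app(challenge, requests, replies)

# ...and the response checks out, because the real wallet computed it.
# (In real life the card network, not the terminal, verifies the cryptogram;
# that check is collapsed here for brevity.)
expected = hmac.new(WALLET_KEY, challenge, hashlib.sha256).digest()
print("payment accepted:", hmac.compare_digest(response, expected))
```

Valid cryptography proves which wallet answered, not where that wallet was, and that’s the loophole the crooks are exploiting.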
This isn’t the sort of threat model that you might expect, because NFCGate transaction proxying doesn’t involve malicious outsiders cracking any cryptographically secured communications.
NFCGate merely requires that the rogue in-store purchaser and the owner of the rogue walletized phone agree in advance to co-operate.
End-to-end encryption doesn’t protect you if the people at each end are crooks who are working together to defraud you. In this scam, encrypting the NFCGate traffic would protect the crooks against both you and the merchant, by making it as good as impossible for a security scanner along the way to analyze the content of the unwanted network traffic to detect fraudulent activity.
Your first thought might be to protect the wallet app on your own phone even more strongly, or to remember to review the cards that are in your own wallet more regularly.
That’s good advice in its own right, but it won’t help directly in this case.
The attackers aren’t attacking your phone, or your wallet app; they’re simply setting up a second phone of their own that just happens to be authorized to request payment authorizations with your card data.
So: stay alert to the phishing lures that get these scams started, and treat any unexpected request for your card details, followed by a one-time MFA code, with the utmost suspicion.
Learn more about our mobile security solution that goes beyond traditional MDM (mobile device management) software, and offers active on-device protection that’s more like the EDR (endpoint detection and response) tools you are used to on laptops, desktops and servers.
Paul Ducklin is a respected expert with more than 30 years of experience as a programmer, reverser, researcher and educator in the cybersecurity industry. Duck, as he is known, is also a globally respected writer, presenter and podcaster with an unmatched knack for explaining even the most complex technical issues in plain English. Read, learn, enjoy!
Featured image of old cash register by Alvaro Reyes via Unsplash.