Should services charge “super users”?

Om Malik says that Twitter should charge super users like me and come up with a business model.

Dare Obasanjo, in a separate but similar post, comes to the conclusion that Twitter’s problems are due to super users like me.

Interesting that both of these guys are wrong.

First of all, Twitter doesn’t store my Tweets 25,000 times. It stores them once and then it remixes them. This is like saying that Exchange stores each email once for each user. That’s totally not true and shows a lack of understanding how these things work internally.

Second of all, why can FriendFeed keep up with the ever-increasing load? I have 10,945 friends on FriendFeed (all added in the past three months, which is MUCH faster growth than Twitter had) and it’s staying up just fine.

But to the point, why not charge super users? I’d pay. But if Dare and Om are right, there’s no way that what I’d pay would come close to covering my real cost to the service.

Either way, Twitter’s woes were happening long before my account got super huge. Remember SXSW last year? I only had 500 followers and Leo Laporte had something like 800. The service still went down. If this were a straight “n-scale” problem, the crashing problems wouldn’t have shown up so early.

Why not just limit account size, like Facebook did? Well, that’s one way to deal with the problem, but if you look at my usage of Facebook it’s gone down to only a few minutes every month. I don’t even answer messages there anymore. Why? Because I get frustrated at getting messages from people who wonder why I won’t accept them as a friend. It’s no business “utility” if I can’t make infinitely large friend lists and use those lists in the same way I use email (which Facebook also bans).

So, what do I do? I get excited by FriendFeed, which lets 11,000 people interact with me in a public way. I have a feeling that rapid growth will continue unabated, and so far FriendFeed has stayed “Google fast.”

Nice try, though.

139 thoughts on “Should services charge “super users”?”

  1. Charging is silly.

    - Money won’t help twitter right now.
    - Charging won’t deter “superusers”.

    They shouldn’t charge, they should ban.

  2. I don’t know if Twitter are using a sharded database yet; at 350,000 users they still had only one database and a read slave:
    http://highscalability.com/scaling-twitter-making-twitter-10000-percent-faster
    Dare’s post would make sense if they have now moved to a sharded structure, but my best guess is that they haven’t had a chance to do that yet.

    It seems there will be duplication at least in the caching layer (memcached): every time Scoble sends a message, 25,000 per-user caches get invalidated and need repopulating by new SQL queries.

    Twitter are looking to get rid of the “with others” tab from a user’s page to avoid at least some of this type of problem; see here:
    http://groups.google.com/group/twitter-development-talk/browse_thread/thread/89a7292e5a9eee6d

    I think charging heavy users is the wrong model.
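
    To make that caching cost concrete, here is a minimal sketch of per-follower timeline cache invalidation, assuming the python-memcached client and made-up key names (an illustration only, not Twitter’s actual code):

        import memcache  # assumes the python-memcached package is installed

        mc = memcache.Client(["127.0.0.1:11211"])

        def invalidate_follower_timelines(author_id, follower_ids):
            # One new tweet dirties every follower's cached timeline page,
            # so each of those pages must be rebuilt by fresh SQL queries.
            for follower_id in follower_ids:
                mc.delete("timeline:%d" % follower_id)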

  3. Robert (Scoble, not the other one) -
    Twitter does store multiple copies of each message; they’ve said so repeatedly in various presentations.

  4. Robert: Um, I don’t think you understand what Dare was saying. You might wanna calm down a touch. It might be unfair to blame *you* for Twitter’s woes, but Dare’s analysis of the architecture is probably pretty accurate.

    Open up Twitter… now, did you wait several minutes for your page to appear? If not, then something’s being cached on the server side. It could be via memcached, it could be via “baking” your page instead of “frying” it, or whatever. But the data isn’t being collected on the fly as you seem to believe. It’s being pushed into the cache when you’re not around to ensure UI response times remain tolerable.

    Dare’s point was that Twitter was built as a micro-blogging system, and that’s how blogging systems work. You cache the hell outta everything, and you make a choice… make some users wait for extended page renders, or burn cycles in the background to ensure that everyone gets equal treatment.
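
    To illustrate the “baking” idea, here is a minimal sketch of pre-rendering a timeline in the background so that page loads become cheap cache reads (the in-memory cache, key names, and rebuild trigger are assumptions for illustration only):

        import time

        cache = {}  # stands in for memcached

        def bake_timeline(user_id, fetch_tweets):
            # Background job: pre-render the page before the user asks for it.
            cache["timeline:%d" % user_id] = {
                "html": "\n".join(fetch_tweets(user_id)),
                "baked_at": time.time(),
            }

        def get_timeline(user_id):
            # The request path only reads the baked copy; no joins at page-load time.
            entry = cache.get("timeline:%d" % user_id)
            return entry["html"] if entry else "rendering, please wait..."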

  5. “First of all, Twitter doesn’t store my Tweets 25,000 times. It stores them once and then it remixes them. This is like saying that Exchange stores each email once for each user. That’s totally not true and shows a lack of understanding how these things work internally.”

    Robert, as was already pointed out this was once true for Exchange, but regardless I fail to see how you can make this same assumption for Twitter.

    Regardless of how many times it’s stored, Twitter also has a tougher routing problem. With Exchange, the sender defines where the message will be received. Twitter is fundamentally different – the sender broadcasts the message, and then the system needs to figure out where to deliver it. That means a delivery decision for each of your 25,000 followers – remember, it still has to figure out whether I will receive the message based on whether it’s an @reply and what my settings are.

    Twitter also has to deliver it to countless tracks. Let’s assume that the average word length for English is 5.10 (http://blogamundo.net/lab/wordlengths/). On Twitter it’s likely less, given the 140-character limit; we tend to use more abbreviations and generally shorter words. Taking out, let’s say, 30 chars for punctuation, that leaves roughly 20 distinct words. Twitter in turn needs to figure out who is tracking what, and the track functionality supports tracking word1+word2+word3. Obviously there are a number of ways to implement this more efficiently, but in effect Twitter has to do a fair amount of processing to see if a given message should be delivered to a given person’s track queue.

    It’s clear that they have a bottleneck somewhere. Given the roots of the service, it’s pretty clear the architecture didn’t plan for this kind of use – and they admitted it in the link Dario posted. None of us really know what’s going on behind the scenes, but based on what little evidence we have, Dare’s scenario seems plausible and perhaps likely.

    Ignoring some of the differences in how the service is used, the other thing FriendFeed had was the luxury of architecting their system after they saw how Twitter was being used. Twitter likely would have done things differently with the benefit of hindsight, but it sounds (from interviews with Blaine) like much of their time was spent fighting fires as opposed to re-engineering the system.
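
    As a rough illustration of that track-matching cost, here is a sketch that checks one incoming message against every user’s tracked terms (the data structures and names are assumptions, not Twitter’s implementation):

        import re

        # user_id -> tracked terms; a term like "word1+word2" requires all words to appear
        track_subscriptions = {
            101: {"iphone", "sxsw+party"},
            102: {"scoble"},
        }

        def deliver_to_tracks(message):
            words = set(re.findall(r"[a-z0-9']+", message.lower()))
            recipients = []
            for user_id, terms in track_subscriptions.items():
                for term in terms:
                    if all(word in words for word in term.split("+")):
                        recipients.append(user_id)
                        break  # one match is enough to queue the message for this user
            return recipients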

  6. There are two basic ways to build a Twitter-like solution. Either you have (A) a single write per tweet and huge joined reads per user, or (B) a huge number of writes per tweet and a single cheap read per user.

    With Twitter, reading generally happens more often than writing, especially when you have desktop clients built around polling. That implies going with solution (B), which has some big problems – most databases aren’t set up to deal efficiently with lots of writes.

    So, you can try to work it with solution (A), but then you need lots of muscle for all these joined queries. If you’re using database sharding, you’ll probably need to issue queries to multiple databases running on multiple machines, then join all the results and sort them by time, for each user page refresh or desktop client poll. That’s a lot of work per user.

    It sounds pretty expensive – better cache it. That leads to a hybrid solution: a single write, with the combined reads done only occasionally (i.e. not on every poll or page refresh). Some risk of stale updates.

    No matter which way you look at it, though, the scaling isn’t quite linear, as some of the old folks will follow new folks as they get added. It should ultimately end up as linear, though with a high constant factor, with that constant determined by the average “noise threshold” per user.

    Looking at the pure “unit of work”, lots of writes probably beats lots of reads, because the reading solution requires sorting and, with the addition of caching layers, has cache coherency problems. Writing can be based around appending to queues.

    Also, all the “extra” features that Twitter folks (in their blogs at least) seem to think are so essential are quite costly to implement.
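
    A minimal in-memory sketch of the two designs described above – (A) store once and join on read versus (B) fan out on write – with made-up names, purely for illustration:

        from collections import defaultdict

        followers = defaultdict(set)          # author_id -> follower ids
        tweets_by_author = defaultdict(list)  # design (A): store each tweet once
        inboxes = defaultdict(list)           # design (B): copy into every follower's inbox

        def post_a(author_id, text):
            tweets_by_author[author_id].append(text)   # one cheap write

        def read_a(user_id, following):
            # expensive read: gather and merge every followed author's tweets
            return [t for author in following for t in tweets_by_author[author]]

        def post_b(author_id, text):
            for follower_id in followers[author_id]:   # expensive fan-out write
                inboxes[follower_id].append(text)

        def read_b(user_id):
            return inboxes[user_id]                    # one cheap read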

  7. Hi Robert. Do you remember that Twitter was born with a different goal? Do you remember the name Twttr? Originally it was “only” a group-SMS app. The team is the same as Odeo’s, true?

    So Twitter is a sort of messaging system, like IM but in a public way (though you can also set a protected status – why are you frustrated?), and as the team themselves write, “Twitter was not architected as a messaging system”:

    http://dev.twitter.com/2008/05/twittering-about-architecture.html

  8. Geoff: Google have bought Jaiku and so are unlikely to buy Twitter. :-)

    I wonder how much the outages are driving people into Pownce and Jaiku. I know of at least one of my ‘Twitter friends’ who is going *back* to Jaiku because of the service problems.

  9. No, services shouldn’t charge “super users.” (I’d be surprised if “super users” don’t start receiving significant sponsorships to come and use a service.)

    As far as the workflow of Twitter vs. the workflow of FriendFeed goes, it’s impossibly unfair to compare Twitter to FriendFeed (yet). Twitter is pushing updates the moment you send an update. FriendFeed isn’t doing instant updates via XMPP (Jabber) or SMS.

    Additionally, Twitter is at the “oh wow, if I follow 10,000 people I’ll probably have 1,000 follow me back and I can spam them” stage. This is creating a large number of “super users”, not just you, Robert :-) They’re getting hammered in traffic compared to FriendFeed.

    Let’s compare the numbers in terms of service reliability and overall load (rounded down)… You’ve got 10,000 followers on FriendFeed and 20,000 on Twitter. If this is a true representation of the population on each service (it’s not, but we’ll pretend), this means Twitter has double the user traffic. Double the traffic, in a push-based service, does not mean double the load… there are double the updates going to double the followers.

    A semi-decent formula for load based on the above:
    Twitter != FriendFeed x 2
    Twitter = FriendFeed ^ 2
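
    A back-of-the-envelope illustration of that squaring intuition, with made-up numbers: in a push-based service, doubling both the update rate and the average audience quadruples the number of pushes.

        def deliveries(updates_per_hour, avg_followers):
            # every update must be pushed to every follower
            return updates_per_hour * avg_followers

        print(deliveries(100, 10000))  # baseline: 1,000,000 pushes per hour
        print(deliveries(200, 20000))  # double both: 4,000,000 pushes per hour (4x, not 2x)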

  10. As far as I’m aware, Twitter is the only service that allows posting and receiving by SMS. The big problem with SMS is that it is an untimed service; when I text, there is no guarantee when, or if, it will be delivered – this must be a problem for them.

    Robert, if you remember, in the bad old days :-) when Blogger was crashing all the time, they offered a Pro service where you paid in the hope of some reliability – fortunately Google bought them out and, over a year or two, sorted out the problems. I hope Google do the same with Twitter :-)

  11. >I’ll grant it doesn’t do it now. But it sure as hell used to.

    I know it did. Which is why some people still don’t understand the architecture that Exchange uses (and why I was “educated” on the issue).

    By the way, this caused a famous and massive problem inside Microsoft when the database server filled up after someone accidentally emailed something to “all.” Email went down for two days, the way I heard it.

  12. “This is like saying that Exchange stores each email once for each user. That’s totally not true”

    Sweet how you never had to work with an Exchange server which did exactly that, and then added ‘All’ as a recipient to the address book of every user.

    I’ll grant it doesn’t do it now. But it sure as hell used to.

  13. Michael Foord: nah, the problems would become much, much worse as they scaled up. The architecture they chose isn’t too far off. It’s just that they never did engineer it properly. The fact that just this week they’ve gotten the ability to turn off features one by one shows me that they never were run professionally until recently. I bet that Twitter starts getting stable very quickly now. Remember, there are only a million or two people on Twitter. Facebook keeps up with 80 million. Hotmail, 200 million every 30 days. Facebook and Hotmail don’t go down, even though they are doing stuff more complex and at a larger scale than Twitter is.

  14. So it is no secret to say that Twitter wasn’t created with scalability in mind – like 90% of all “2.0” projects. After all, Twitter was born and stayed completely in the dark for over 8 months until it exploded at SXSW ’07. I don’t think it went down during those first 8 months (and if it did, not many people noticed anyway).

    And ever since the first time it went down, chances are they’ve been patching and optimizing things here and there, when perhaps what Twitter needs is a complete remake – which shouldn’t really be THAT hard considering Twitter is, above all, a very simple application – that thing doesn’t put a spacecraft on Mars – so the main focus should be scalability. Perhaps they’re doing that already. If not, they should.

    On the other hand, FF most likely was created with scalability in mind, and so far, other than throwing hardware at it, as long as they stay somewhat ahead of the growth game, it doesn’t need anything to stay afloat as it grows. It’s not rocket science either – they simply didn’t (supposedly) ignore the possibility of growth when they started to write their software. Which is what everyone should do when starting a project, and there’s plenty of documentation out there and plenty of great engineers who know how to architect a simple (or complex) app so that it will scale if necessary.

    Leaving that aside, the business model is a very interesting and fair question. No, I don’t agree with Om. Not because I think super-users shouldn’t be charged, but because charging super-users doesn’t fix anything, scalability-wise. I also don’t think Om understands how Twitter works internally. OK, *I* don’t know how Twitter works, but if it works the way Om describes it, then the folks at Twitter absolutely, definitely need to rewrite the whole thing from scratch. Personally I didn’t like either Obasanjo’s or Om’s article at all. You? Well, you’re talking about Twitter and FriendFeed, and a bit of Facebook. Thank god for that “This is why I love the tech industry” article, because it is for posts like that that I’m still reading you. (No offense, I just don’t use either Tw or FF, so this fun madness you guys have is completely off my radar…)

  15. “If this were a straight “n-scale” problem, the crashing problems wouldn’t have shown up so early.”

    Why not? As they scale up their system, the number of users is growing just as fast. If they scale just quickly enough to stay one step behind the problem, they will continue to have issues.

    I don’t blame them – it’s a difficult problem and not many sites have to cope with such massive growth so quickly.

  16. >>Once you post a Message it gets copied to the streams of all your followers.

    Absolutely wrong.

    Only gets copied if a user instantiates his object and asks for those things. Even then, it’s not “copied” except to display it, and that copy is temporary and stored in your browser, or in your Google Talk account.

  17. Translation: the only scaling problem would be when I started up my Twitter and wanted to see all objects from everyone. Then my object would have to work harder than, say, your object, because your object would only have to find a few Tweets. Mine has to find 23,000. OK, so they have to throw a little extra processor at my account, but only when I’m using the system. If, like right now, I’m not using the system, it puts absolutely no extra load on the system unless someone calls my object and makes it do work.

    How do I know this? Ask the Exchange team how it keeps stuff from duplicating all over the place and keeps server disks from filling up.

  18. Robert, Duncan Riley referred me to a plugin for WordPress blogs that automatically adds FriendFeed comments to the originating post on your blog + lets people comment on FriendFeed items from the blog as well. You can see it in action on his Inquisitr.com site. Here’s the link to that plugin if you’re interested: http://tinyurl.com/2uqa6l

  19. Tiago: FriendFeed certainly notifies people. It even has an API that messages can be sent into.

    As to architecture: OK, let’s have one object:

    Scoble’s Tweets.

    Then let’s have another object.

    Jane Smith’s Tweets.

    Now let’s have a third object:

    John Schmidt’s Tweet page that displays both Jane’s and Scoble’s Tweets.

    Sounds like Scoble’s and Jane’s Tweets are being copied, right?

    No.

    In fact, if John Schmidt never uses his account, nothing happens at all.

    But, let’s say that John Schmidt opened his Web browser and visited Twitter. Well, ONLY THEN does John Schmidt’s object (which knows which Tweets it should go look for) talk to the other two objects, and say “give me your Tweets.” Then John’s object mashes them together and displays them to John. It also, then, closes down and releases all memory and disk space until the next time John asks for something.

    This does not change if there are a million “objects” being mashed up. No copies are living permanently. Just the original objects.

    Got it yet? I’ll do a video, if you want to understand it more.
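
    A minimal sketch of the join-on-read model described above, with hypothetical users and data (purely illustrative, not Twitter’s code): nothing is materialized for John until he asks for his page.

        tweets = {
            "scoble": ["tweet A", "tweet B"],
            "jane":   ["tweet C"],
        }
        following = {"john": ["scoble", "jane"]}

        def render_page(user):
            # Nothing is stored per reader; the page is assembled only on request.
            merged = []
            for author in following[user]:
                merged.extend(tweets[author])
            return merged

        print(render_page("john"))  # ['tweet A', 'tweet B', 'tweet C']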

  20. Could you elaborate on “remixes them”? Because so far Dare Obasanjo’s thoughts sound much more plausible.

  21. If you’re awake for 12 hours a day you have 3.9 seconds to ‘interact’ with each of your ‘friends’.

  22. As far as I know this is exactly how Twitter does it. Once you post a Message it gets copied to the streams of all your followers. The problem is that building up the latest messages of the people you follow based on their user_id is just not working fast enough. Having a copy of your message is easier and faster to load. So this is exactly how it works, but I am not sure why this means that you need to start paying :)

  23. Actually, Dare’s post sums it up. It doesn’t mean that Twitter’s architecture is the one he suggests, but given their database problems it’s very likely.

    Every time you update, Twitter has to get a list of your 25k followers, sort out any @replies, find out what their notification settings are, notify each and every one individually, and add the message to their feed (even if it’s still the same one). All this while their feeds are being hit like crazy by desktop clients.

    So, Twitter is a notification system with multiple entry and exit points. FriendFeed is an aggregator. It doesn’t, as far as I know, notify anyone.
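
    A rough sketch of the per-update fan-out described above, with hypothetical per-follower settings, a simplified @reply rule, and an in-memory feed store standing in for whatever Twitter actually uses:

        from collections import defaultdict

        feeds = defaultdict(list)

        def notify(follower, channel, text):
            print("notify %s via %s: %s" % (follower, channel, text))

        def deliver_update(author, text, followers, settings):
            for follower in followers:
                prefs = settings.get(follower, {})
                # simplified: @replies only go to followers who opted to see all replies
                if text.startswith("@") and not prefs.get("show_all_replies", False):
                    continue
                feeds[follower].append((author, text))  # add the message to their feed
                if prefs.get("sms"):
                    notify(follower, "sms", text)        # notify each one individually
                if prefs.get("im"):
                    notify(follower, "im", text)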

Comments are closed.