No RSS? Feed43 lets you make your own

This is gonna be controversial, that’s for sure. Your favorite site doesn’t provide news feeds? This service, Feed43, converts any Web page to an RSS feed on the fly. Subversive!

I wonder if it’ll make full text feeds out of pages that only provide partial text feeds? Hmmm.

Update: Another company, FeedYes, released a similar product today. They claim that FeedYes is easier to use, faster, and has one-click adding to MyYahoo and MyMSN.
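
Feed43 doesn't publish its internals, but the basic technique (fetch a page, match a repeating fragment of markup, wrap the captures in RSS 2.0) can be sketched in a few lines of Python. The markup, the pattern, and the example.com URLs below are invented for illustration, not Feed43's actual code:

```python
import re
from xml.sax.saxutils import escape

# Hypothetical page markup; a real scraper would fetch this over HTTP.
html = """
<div class="post"><a href="/a1">First story</a></div>
<div class="post"><a href="/a2">Second story</a></div>
"""

# Extract (link, title) pairs from each repeating block.
items = re.findall(
    r'<div class="post"><a href="([^"]+)">([^<]+)</a></div>', html)

# Wrap the captures in a minimal RSS 2.0 document.
rss_items = "".join(
    f"<item><title>{escape(title)}</title>"
    f"<link>http://example.com{escape(link)}</link></item>"
    for link, title in items
)
rss = (
    '<?xml version="1.0"?><rss version="2.0"><channel>'
    "<title>Scraped feed</title><link>http://example.com/</link>"
    "<description>Generated from page markup</description>"
    f"{rss_items}</channel></rss>"
)
print(rss)
```

The fragility is obvious from the sketch: the moment the site changes its markup, the pattern stops matching and the feed goes empty.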

91 thoughts on “No RSS? Feed43 lets you make your own”

  1. I use http://www.IrisFeed.com – it also allows you to create RSS feeds for a website that doesn’t have one.

    BUT it also allows you to create Atom and even iTunes Podcast XML feeds – all for free with no banners and no registration :)

  3. Isn’t the problem the total user experience rather than the technology? The analogy to Napster is perfect.

     Forget the technology for a moment. Napster got killed, but iTunes flourished. The question is not whether you can digitize music; the key question is whether you can bring value to the creator of music. Napster did not bring value to the content creators, but iTunes sure did.

     Feed43 is good technology. It brings content closer to the user. What it does not do is bring the content creator and the content consumer closer together. Until it brings this value, it’s just reasonably good, hacky technology.

  6. I see some illegally published feeds already:

    ProSportsDaily.com probably wouldn’t like this one: http://feed43.com/4601624585672733.xml

    FactCheck.org might not be a big fan of this: http://feed43.com/1716483175047651.xml

    I wonder what TownHall.com would think of their content being redistributed: http://feed43.com/4817051363607806.xml

    It’s only a matter of time before corporate websites become aware of their content being scraped and redistributed publicly to whoever wants to see it.

  9. Roger,

    Why do you differentiate between online aggregators and desktop ones? If I use any online aggregator to watch scraped feeds personally, this does not violate any law either. Using an online aggregator is not the same as publishing the content online. Syndicating scraped feeds (intentionally displaying them to other people) without prior permission of the original copyright holder — this is where the problem begins.

    If any online aggregator allows indexing personal user pages, thus actually indexing content of the third-party copyright holder, this is a problem with the online aggregator. I personally don’t know of any aggregator that exposes personal user pages to the public.

    To summarize: as long as you *read* the scraped content, not *publish* it, there is no violation, no matter which software, desktop or online, you use to read the content.

  12. Igor: Well, technically, you *are* redistributing content. (In fact, you’re creating a derivative work.) But your basic point has merit… the web is built on such stuff. As I said somewhere else, when viewed from a certain perspective, the entire web is one giant copyright violation.

    If used as a personal proxy, I don’t think there’s any legitimate argument in opposition to Feed43’s service. The problem is that far too many (and arguably most) people don’t use RSS via private desktop apps… they read their feeds on Planet portals, in search-engine-accessible online aggregators, and so on. In that context, Feed43 causes problems.

  15. Chris,

    There is nothing against the DMCA here, because Feed43 does NOT redistribute content from other sites. Again, it is a *personal* *proxy* that allows the owner/creator of the feed to view website content from within a news aggregator. Do HTTP proxies violate the DMCA? Do personal ad-blocking proxies violate the DMCA? Something tells me they do not. Think about it.

  18. This is similar to something I started working on a few years ago. Actually, it started as a simple screen scraper before RSS was even invented. Unfortunately I never did get to release the application that allows users to define their own feeds, but there are a handful of feeds I created that people have stumbled across on the web.

    http://www.simbolic.net/Software/HTML2RSS/

    There used to be another site around a couple of years ago, I think it was called MyRSS, that had a database containing thousands of feeds. Does anybody remember it, or know what happened to it?

    While I’ve always thought the idea was cool, I think as time goes on more and more sites will continue to add their own feeds, so the value of this stuff actually continues to decline (plus it’s a pain to keep those scrape definitions up to date). Hopefully one day every site has an official feed.
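
    A “scrape definition” of the kind described above usually amounts to a template with capture and skip wildcards, which is exactly why it breaks whenever the page markup changes. Here is a minimal sketch, assuming a Feed43-style syntax in which `{%}` captures a value and `{*}` skips over text; the sample markup is hypothetical:

```python
import re

def pattern_to_regex(pattern):
    """Translate a Feed43-style search pattern into a compiled regex:
    {%} captures a value, {*} skips over arbitrary text."""
    out = []
    for piece in re.split(r'(\{%\}|\{\*\})', pattern):
        if piece == '{%}':
            out.append('(.*?)')      # capturing wildcard
        elif piece == '{*}':
            out.append('(?:.*?)')    # non-capturing skip
        else:
            out.append(re.escape(piece))  # literal markup
    return re.compile(''.join(out), re.DOTALL)

# Hypothetical item markup and a matching scrape definition.
html = '<li><a href="/x">Hello</a> posted Monday</li>'
rx = pattern_to_regex('<li><a href="{%}">{%}</a>{*}</li>')
print(rx.match(html).groups())  # → ('/x', 'Hello')
```

    Change the `<li>` to a `<div>` on the source page and the match fails, which is the maintenance burden the comment is pointing at.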

  21. This is great – just this past weekend, I was playing around with the RSS feeds on Craigslist, trying to think about the uses. I came up with this idea of generating “alerts” for people doing some apartment hunting in the Seattle area and the result was Cribot.com. All that took only a weekend to put together!

    This just goes to show how useful RSS feeds can be and how much information can be gleaned from them. Feed43 will just let you jump on the bandwagon more easily.
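
    An alert service of this kind reduces to polling a feed and filtering items by keyword. A minimal sketch; the listings feed and keyword are invented, and a real service like the one described would poll Craigslist’s actual feeds over HTTP:

```python
import xml.etree.ElementTree as ET

# Hypothetical listings feed; a real alert service would poll this over HTTP.
rss = """<?xml version="1.0"?>
<rss version="2.0"><channel><title>Listings</title>
<item><title>1BR apartment in Ballard</title><link>http://example.com/1</link></item>
<item><title>Parking space downtown</title><link>http://example.com/2</link></item>
</channel></rss>"""

def matching_items(rss_text, keyword):
    """Return (title, link) pairs whose title contains the keyword."""
    root = ET.fromstring(rss_text)
    return [
        (item.findtext("title"), item.findtext("link"))
        for item in root.iter("item")
        if keyword.lower() in item.findtext("title", "").lower()
    ]

print(matching_items(rss, "apartment"))
# → [('1BR apartment in Ballard', 'http://example.com/1')]
```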

  24. “Personally, I’ll be blocking access to Feed43 URLs in my subscription code.”

    Some sites, like sitespaces.net, encourage RSS syndication, but others, like Facebook, don’t. It should be up to the copyright holder what content to syndicate and what not to.

    I’m also going to implement blacklisting.
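
    Blacklisting along these lines can be as simple as a host check at subscription time. A minimal sketch; the blocklist contents and function name are illustrative, not anyone’s actual subscription code:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of scraping-proxy hosts; extend as new services appear.
BLOCKED_HOSTS = {"feed43.com", "www.feed43.com"}

def allow_subscription(feed_url):
    """Reject subscription requests whose feed is served by a known scraping proxy."""
    host = (urlparse(feed_url).hostname or "").lower()
    return host not in BLOCKED_HOSTS

print(allow_subscription("http://feed43.com/4601624585672733.xml"))  # False
print(allow_subscription("http://example.com/feed.xml"))             # True
```

    The obvious weakness is that host blocklists need constant upkeep, which is why a shared opt-out convention (see the robots.txt discussion below in the thread) would scale better.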

  27. DMCA – copyright infringement.

    You can’t redistribute others’ copyrighted content without written permission and/or licensing fees.

    Downloading © content from another website for redistribution is against the DMCA.

    This isn’t like Digg or Google News, where there is a link to the story.

  30. Robert, What’s the “Legal Eagle’s” opinion?
    How a Web page is used by the end user (2nd Party) is governed by the TOS of the publisher (1st Party). Does the 3rd Party’s TOS conflict with the 1st Party’s?
    Granted, it’s for a useful purpose: there are a lot of sites without RSS because of ignorance and laziness, and it’s because of this that the 3rd party is able to provide a service. “Drawing the fine line” is important for the 2nd Party’s legitimacy and sanity. So the REAL question is: “Is it legal?”

  33. Igor: Let me say quickly that I’m not casting aspersions on your motives. I’m sure you intend Feed43 to be a wholly positive service. In the short view, it *is* a positive service.

    Hell, I’m not even opposed to the concept behind Feed43. I don’t see anything wrong with an individual user of a desktop aggregator using it to subscribe to a feed-free site. I’m not worried about anyone’s ad-supported business model, as the AdBlock extension in my browser attests. My machine, my rules.

    But when it comes to web-based aggregators and syndication applications, Feed43 starts to look extremely problematic. It opens up aggregator developers to lawsuits based upon the actions of Feed43 users, and makes it easier than ever to redistribute content without authorization. Folks like me keep reminding content producers that they need to take responsibility for how they publish their material, and Feed43 removes one fundamental avenue of responsibility.

    Obeying robots.txt is a wonderful thing, and you should be applauded for it. In fact, that alone is enough for me to withdraw the “toxic” statement I made earlier… it demonstrates that you’re interested in playing fair.

    Perhaps you could take a leadership role in this situation? What if Feed43 evangelized an “all purpose” user-agent for scraping services, one that would make blocking (via robots.txt) a one-step process? In addition to obeying references to “Feed43 Proxy”, you could also respect references to “All Scraping Proxies”… other well-meaning service providers could do the same, and blocking those services as a whole would become pretty darned simple.
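
    The robots.txt mechanism the comment proposes already works with stock tooling; all that is missing is an agreed-upon user-agent token. A sketch using Python’s urllib.robotparser, with “All-Scraping-Proxies” as an invented stand-in for that shared token:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules; "All-Scraping-Proxies" is an invented
# user-agent token standing in for the proposed shared identifier.
robots_txt = """\
User-agent: All-Scraping-Proxies
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())
rp.modified()  # mark the rules as loaded; can_fetch is conservative otherwise

print(rp.can_fetch("All-Scraping-Proxies", "http://example.com/news.html"))  # False
print(rp.can_fetch("SomeBrowser", "http://example.com/news.html"))           # True
```

    A well-behaved scraping proxy would run exactly this check before fetching, so one robots.txt rule would opt a site out of every cooperating service at once.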

Comments are closed.