Hotline is a new-style webring, so you don’t have to do a lot of registration to join. In fact, this is one of the easiest webrings to join – you only have to do 2.5 simple steps and it just works. The ring code is straight HTML which you create yourself; just follow the instructions. This is nice because the code does not take up much space on your page. I have no idea how it’s coded under the hood, but that hardly matters – it just works.
There is no subject theme for the webring, but it appears to be made up mostly of personal blogs, websites, and some cool retro Web 1.0-type sites. You will find a list of all sites in the ring at the link above, so you get an idea of who you are hanging out with.
It’s young, it’s fresh, it’s got a pink background. Why wouldn’t you check it out?
Over the past year I’ve been impressed with how much the Indieweb.org wiki has improved. Heck, it was good when I first saw it, but members are very active and keep editing, improving, adding, and tweaking relentlessly, so it just gets better.
Somebody in one of the Indieweb chat channels recently suggested that the Indieweb wiki would make a good seed site (aka starter crawl) if one were building a new search engine index. This is something I’ve been thinking about, off and on, for a couple of weeks, and I have to say I agree: the Indieweb.org wiki would make a good seed site for a web search engine.
What’s a “seed site”?
Briefly, a seed site, or starter crawl, is a site (or one of several) that a search engine crawler indexes first in order to find a wide variety of worthwhile pages. The crawler indexes the URLs it finds there, then indexes more pages on those sites, discovering the URLs they link to, and so on. In the old days the Yahoo directory and the Dmoz directory were considered prime seed sites for search engines. Later Wikipedia came along, and it is still considered an important seed site for outbound links. These sites were considered prime starters in part because their outbound links had all been reviewed by human editors, so a certain level of quality could be presumed.
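To make the seed-site idea concrete, here is a minimal sketch of frontier expansion from a seed, with a made-up in-memory link graph (`LINK_GRAPH` and all the URLs in it are hypothetical stand-ins for real HTTP fetching, not how any actual crawler is built):

```python
from collections import deque

# Hypothetical link graph standing in for real fetches: page -> outbound links.
LINK_GRAPH = {
    "indieweb.org/wiki": ["blog-a.example", "blog-b.example"],
    "blog-a.example": ["blog-c.example"],
    "blog-b.example": ["blog-a.example", "blog-d.example"],
    "blog-c.example": [],
    "blog-d.example": [],
}

def crawl(seeds, max_pages=100):
    """Breadth-first frontier expansion starting from a list of seed URLs."""
    frontier = deque(seeds)
    seen = set(seeds)
    indexed = []
    while frontier and len(indexed) < max_pages:
        url = frontier.popleft()
        indexed.append(url)                   # "index" the page
        for link in LINK_GRAPH.get(url, []):  # discover its outbound links
            if link not in seen:              # avoid re-queuing known pages
                seen.add(link)
                frontier.append(link)
    return indexed

print(crawl(["indieweb.org/wiki"]))
```

The point of a good seed is visible even in this toy: one human-curated starting page pulls the whole neighborhood of linked sites into the index.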
If you want to learn more about seed sites, I suggest Bill Slawski’s article, Seed Sites for Search Engine Web Crawls. The comments are worth reading too.
Why Use the Indieweb.org Wiki?
First, there is a wealth of good information on the wiki pages alone, even without crawling the outbound links. Second, the outbound links themselves are worth crawling.
Now, I would not use the Indieweb wiki as my only seed unless I were somehow creating an Indie-Web-only search engine, but I think it would be good as part of a mix of seed sources. The Web has changed in the last 20 years; commercial sites have taken over, and the wiki’s outbound links lend a certain, needed counterbalance to the commercial.
The outbound URL’s are human curated. This minimizes low quality content.
It links to some quality content that might take a while to find by other means: namely, a lot of quality blogs, which also link out freely.
Wiki pages act as tags. While this isn’t quite as useful for a search engine as a full directory taxonomy of categories, it is still useful.
The wiki is not outrageously huge. Make no mistake, it’s big and growing bigger, but it’s dwarfed by something like Wikipedia and easier to digest within the bandwidth limits of a startup.
It’s constantly being updated. This makes it a good source for re-crawls because new links are constantly being added.
Other Good Seed Sites:
Curlie.org – the successor to the old Dmoz (Open Directory Project). Volunteer editors have been cleaning out dead links for a couple of years, and possibly adding new listings, so it’s not quite as dead as one might think. For somebody starting a web search engine, it’s hard to ignore 3 million or more listings. Said listings may be older sites, but I’d gamble that the quality is better than links from Twitter or Facebook would be. Plus, that taxonomy. I would not spend time re-crawling for new URLs after the initial starter crawl.
Wikipedia – they don’t link out quite as freely as they once did, but the listings are much more up to date.
Reddit – or at least large parts of Reddit. It’s big and diverse. Sub-Reddits act as tags. Constantly expanding with new links. Helps you determine what is new and popular. This is a good place both to start a crawl and to re-crawl for new links. Reddit was suggested to me by some very experienced SEOs when we were discussing this topic, and I trust their judgment.
Indie Map – Maybe. I’d include it as a starter but would skip re-crawling.
Hacker News – maybe for a seed crawl. I would try to tap HN, including the comments, for fresh new links.
Pinboard – constantly updating bookmarks. This would make a good seed site.
Agree or disagree on Indieweb.org as a seed site? Can you think of other seed sites I’ve missed? If so, leave a comment below. Thanks.
Search is going to change in March 2020 for Android users in the EU. Starting in March, when EU users buy and set up a new Android phone, they will be presented with 4 choices for the default search engine on that phone. Google will be one choice, and Google has agreed to auction off the other 3 slots (so they can cash in). (I don’t know if this affects older Android phones that upgrade to a newer version of Android.)
All this is important because a certain percentage of people will choose a search engine other than Google. Also, once chosen, people rarely change the default search engine on their devices. This presents a huge chance for alternative search engines to gain some recognition and market share within the EU. This is a big deal.
DuckDuckGo won a spot in every country. This is good, but can they keep users? I’ve heard their search results can be weak in some non-English searches. This is where DDG’s sole reliance on Bing for the bulk of its results might be a liability: can Bing, and therefore DDG, provide satisfactory results in all European languages?
Info.com (the old Infospace.com) won in every country too. Not a really good choice – kind of a waste of a slot, and it shows the weakness of the auction model.
Qwant won in most major EU countries. This is good. Qwant uses Bing for English language searches, but they have their own crawler and index for French, German, Italian and Spanish. I hear their results in French are quite good so Qwant stands a chance of gaining users here.
PrivacyWall – who are they, and where did they come from? I think they have their own index, which appears small. They had better crawl like crazy between now and March.
GMX is just a Google retread.
Regional search engines: Yandex (Russia) and Seznam (Czech Republic and Slovakia) are already dominant in their home languages, so I expect they will pick up even more market share from this.
Of course, Google is trying to subvert the intent of the EU regulators by making this an auction to the highest bidders. It’s legal, but it proves the point that Android is open source in name only, a fiction; in reality it’s totally under Google’s control. Placement only for the highest bidders robs startups of badly needed operating and R&D funds, and it prevents charity-based search engines from engaging in their charitable work.
Money should not be the only deciding factor. Still, this is a rear-guard action on Google’s part. The walls of Google’s search monopoly on Android have been breached; will this allow newer EU-based search engines to come along?
My prediction is that both DuckDuckGo and Qwant will win some additional market share in Europe with this. Both of those search engines have enough comprehensive features, like Maps and Wikipedia results, to compete in the mobile market, and I think they can retain users who try them. I don’t see that happening with Info.com, PrivacyWall, or GMX, but maybe they will rise to the occasion, add features, and meet users’ long-term expectations.
It appears new auctions will occur every 3 to 4 months; it will be interesting to see how the lineups shift over time.
Of course, none of this is available in the US or most of the rest of the world. In the US, Google retains its hold over Android, and at the pace US antitrust regulators are working, I don’t expect to see any significant opening up for a long time.
Get your popcorn out, this is going to be interesting.
This lets you set up and manage your own webring on Github. I presume you need to be a Github member to do this and know about things like pull requests (which I don’t). Points to creator Max Bock for figuring this out.
I want to take a deeper look at The Feed Directory at Pine.blog: how it works and whether it is useful.
I am not going to spend time in this review on the free feed reader and paid blog hosting aspects of Pine.blog – you can discover them for yourself.
The Feed Directory is a logical complement to the feed reader because it helps users find useful, interesting, quality RSS feeds that they can then subscribe to and read in the reader. The logic is that if you want people to use your feed reader, you need to make it easy for them to discover good feeds, and it works for that. Keep in mind that a whole generation of web users has grown up never using a feed reader, RSS, directories, or even reading independent blogs, so Pine.blog is making it as easy as possible for these folks to onboard and get started. It’s a Good Thing.
A Look at the Feed Directory.
When you hit the directory index page you see a search box, some Featured Feeds, and then a list of broad categories with feed examples in each. So far it looks like a conventional, old-school directory, where you can either search or drill down through the categories to find what you want. But you would be wrong to assume that: the categories only list “featured feeds”. You cannot browse all the listings in a category, only those that have been featured by an editor or by followers of that feed. This creates some confusion: old-time directory users expect to find all listings under a particular category, and you don’t get that. It also makes the directory index appear smaller than it really is.
While confusing, this is not automatically a bad thing: first, it provides a quick starter selection; second, exceptional feeds get rewarded by human users for being exceptional.
It’s All About Search
The power-user secret behind The Feed Directory (TFD) is in using the search function. Search gives you access to the entire index. TFD spiders the actual content of each feed, which could be 5, 10, or more posts. This makes it much more like the old-time RSS search engines Technorati and IceRocket and less like old-time directories (e.g. Indieseek.xyz, Dmoz). TFD’s spider also goes back and re-spiders each feed at set intervals to index the content of newer posts. (As I write this, I don’t know whether TFD keeps older posts that have dropped off the feed, and if so, how far back it goes. Hopefully it does.) It’s this spidering of content that sets TFD apart and makes it exciting. It is much more powerful than a conventional directory search like Indieseek.xyz’s. Bottom line: use the search form!
When you perform a search you get a SERP listing the feed titles (blog titles) that contain your search term somewhere within the feed. You don’t see a fragment of text containing your keyword like you do with Google, so you have to click through to see the whole feed. Frankly, this is probably good: you get a better idea of what the blogger writes about by seeing numerous posts, so you can make a more informed decision before you subscribe. Still, being so used to search engines like Google and Bing, one might find it a bit frustrating.
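I have no idea how TFD is actually implemented, but the basic pattern of spidering feed content and returning matching feed titles (rather than post fragments) can be sketched with a tiny inverted index. The `FEEDS` data and both feed titles here are invented for illustration:

```python
# Hypothetical feed data: feed title -> texts of the recent posts a
# spider would have fetched on its last visit.
FEEDS = {
    "Retro Blogger": ["a post about directories and webrings",
                      "notes on RSS search engines"],
    "Example Blog": ["cooking with cast iron"],
}

def build_index(feeds):
    """Map each word to the set of feed titles whose recent posts contain it."""
    index = {}
    for title, posts in feeds.items():
        for post in posts:
            for word in post.lower().split():
                index.setdefault(word, set()).add(title)
    return index

def search(index, term):
    """Return feed titles (not post fragments) that match the term."""
    return sorted(index.get(term.lower(), set()))

index = build_index(FEEDS)
print(search(index, "webrings"))  # ['Retro Blogger']
```

Re-spidering, in this sketch, would just mean fetching a feed’s newer posts and running `build_index` again, which is why the index stays biased toward recent content.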
Bias Towards the Recent
By their very nature, feeds only show the most recent posts. So just like RSS search engines of the past, TFD is going to have a bias towards more recent posts. Yet it’s not trying to be a breaking news search engine. One should just keep this in mind.
Openness of Search and Listings
It is to Pine.blog’s credit that they made TFD pretty open for everyone to use and to submit their feeds to. Or course they make it super easy to add a feed to your Pine.blog timeline (reader) but searchers who use a third party or self hosted reader can also use the search function to find good feeds and with a tad more work can add those feeds to a feed reader of their choice. Win – win.
Part of that openness is allowing anybody who registers (for free) to submit their feeds, subject to editor review. TFD does not only list blogs hosted by Pine.blog; it is not a closed ecosystem like Facebook and Twitter.
Providing a search API is another part of the openness.
Not a Search Engine
TFD, even in it’s early stages, is so slick you might start thinking of it as you would a search engine. But it still is a directory even though it spiders content.
Feeds submitted are subject to review by a human editor.
The directory does not search the web for feeds. TFD isn’t going to just find you; you have to add your feed URL. This means that if you want your blog’s feed to be included, you need to submit it.
The bias towards more recent posts (see above) means it will not have the depth of a fully spidering web search engine.
None of the above are negatives. In fact, human edited directories are a plus.
I’m on record for wanting a new Technorati or Icerocket RSS search engine. TFD is a really good start. It is not perfect but it is kind of a big deal, one would think the blogging community would be burning up the pixels talking about it.
I highly recommend that all active bloggers add their feeds, because someday this will be a great way to attract readers. The index is rather small right now, and it will only get better with more feeds listed in it. I also recommend using the search function on The Feed Directory to find good blogs to read, wherever or however you read RSS feeds. Heck, I may add it to the Vivaldi browser as a search engine.
A new local directory and blog has launched for the Lake Orion, Michigan area at LakeOrion.info.
The blog has announcements and community interest posts.
The directory is a proper business directory with addresses, location maps, telephone numbers, and URLs if the business has a website.
This is one of the slickest local directories I’ve come across. I’m impressed with the layout and the information presented in a neat format. It showcases what you can do with a local directory and how useful it can be.
It’s been a busy Late Autumn but we managed to make a few changes and improvements to the directory in late 2019.
800 listings. Yeah, we hit 800, with all sorts of new listings added and more to come. I’ve been removing dead listings (dead wood) as I find them, and there have been some. 800 links isn’t a lot in the big scheme of things, but then we are not trying to be big or pretty: just useful, real, and sometimes entertaining.
Added a climate change category. It has not generated a lot of interest yet, but heck, it’s an important topic and a handy place to find over 30 useful sites on the subject, all in a bunch.
Changed the way search works on the directory. I think it works much better now because it makes it easier to find a category that matches your subject faster.
Added highlighting to search results, so now your search keyword(s) are highlighted in yellow.
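Indieseek’s actual highlighting code isn’t shown here, but one simple way to do this kind of thing is to wrap each case-insensitive keyword match in a `<mark>` tag and let CSS handle the yellow (a sketch, not the site’s real implementation):

```python
import re

def highlight(text, keywords):
    """Wrap each keyword occurrence in a <mark> tag, case-insensitively.
    The yellow comes from CSS, e.g.  mark { background: yellow; }"""
    for kw in keywords:
        # re.escape so keywords with punctuation are treated literally;
        # the capture group preserves the original casing in the output.
        text = re.sub(f"({re.escape(kw)})", r"<mark>\1</mark>",
                      text, flags=re.IGNORECASE)
    return text

print(highlight("Indie web directories", ["indie"]))
# -> <mark>Indie</mark> web directories
```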
Technorati ordered blogs by recency and relied on delicious and flickr for better tag results. Even if I did rebuild it you probably wouldn’t like it.
I miss Technorati and other RSS search engines. However, Kevin brings up a good point: Technorati was developed at a time when:
Twitter and Facebook, which now post breaking news regularly in their timelines, did not yet exist;
Mainstream media was still publishing to the Web on a print-like timetable, mostly once or twice a day;
Search engines like Google still took some time to find, process, and rank new web pages (posts).
So Technorati’s focus on “recency” was appropriate for the time, but not really needed now. (Although I’d still like to see a good RSS search engine today anyway.)
What we need is a search engine that would deep-spider only non-commercial, independent blogs, or filter out all the commercial crap and spam, so that you are only getting results from bloggers. (Yes, the big spidering search engines sort of do this, but you are unlikely to find individual blog posts in the first 5 pages of results for any popular keyword search.)
As I see it, what we need is depth in such an index, with relevancy rather than recency as the primary ranking factor, though recency could be a secondary factor. Plus, you need to filter out the spam blogs. That implies an algorithm of some sort.
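One way to picture “relevancy first, recency second” is a score where a small recency boost decays with post age, so freshness only breaks near-ties between equally relevant posts. This is purely a sketch of the idea; `rank_score`, the 0.1 boost, and the 30-day half-life are all invented parameters:

```python
import math
import time

def rank_score(relevance, posted_ts, now=None, half_life_days=30.0):
    """Combine a relevance score with a small, exponentially decaying
    recency boost. Relevance dominates; recency only breaks near-ties."""
    now = now if now is not None else time.time()
    age_days = max(0.0, (now - posted_ts) / 86400.0)
    # Boost starts at 0.1 and halves every half_life_days.
    recency_boost = 0.1 * math.exp(-math.log(2) * age_days / half_life_days)
    return relevance + recency_boost

now = time.time()
fresh = rank_score(1.0, now, now=now)                # posted just now
stale = rank_score(1.0, now - 90 * 86400, now=now)   # same relevance, 90 days old
print(fresh > stale)   # True: recency breaks the tie

older_but_better = rank_score(2.0, now - 365 * 86400, now=now)
print(older_but_better > fresh)  # True: relevance still dominates
```

Spam filtering would sit in front of this (only scored posts from an allow-listed or human-reviewed set of blogs), which is where the human-edited directory idea comes back in.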