I want to take a deeper look at The Feed Directory at Pine.blog: how it works and whether it is useful.

I am not going to spend time in this review on the free feed reader and paid blog hosting aspects of Pine.blog – you can discover them for yourself.

The Feed Directory is a logical complement to the feed reader because it helps users find useful, interesting, quality RSS feeds that they can then subscribe to and read in the reader.  The logic is that if you want people to use your feed reader, you need to make it easy for them to discover good feeds, and it works for that.  Keep in mind that a whole generation of web users has grown up never using a feed reader, RSS, directories, or even reading independent blogs, so Pine.blog is making it as easy as possible for these folks to onboard and get started.  It’s a Good Thing.

A Look at the Feed Directory

When you hit the directory index page you see a search box, some Featured Feeds and then a list of broad categories with feed examples in each.  So far it looks like a conventional, old-school directory, where you can either search or drill down through the categories to find what you want.  But that assumption would be wrong: the categories only list “featured feeds.”  You cannot browse through all the listings in a category, only those that have been featured by either an editor or followers of that feed.  This creates some confusion: old-time directory users expect to be able to find all listings under a particular category, and you don’t get that.  It also makes the directory index appear smaller than it really is.

While confusing, this is not automatically a bad thing: 1. it provides a quick starter selection; 2. feeds that are exceptional get rewarded by human users for being exceptional.

It’s All About Search

The power-user secret behind The Feed Directory (TFD) is in using the search function.  Search gives you access to the entire index.  TFD spiders the actual content of each feed – this could be 5, 10 or more posts.  This makes it much more like the old-time RSS search engines Technorati and IceRocket and less like old-time directories (e.g. Indieseek.xyz, Dmoz).  TFD’s spider also goes back and respiders each feed at set intervals to index the content of newer posts.  (As I write this, I don’t know if TFD keeps older posts that have dropped off the feed and, if so, how far back they go.  Hopefully it does.)  It’s this spidering of content that sets TFD apart and makes it exciting.  This is much more powerful than conventional directory searches like Indieseek.xyz’s.  Bottom line: use the search form!

Search Results

When you perform a search you get a SERP with a list of feed titles (blog titles) that contain your search term somewhere within the feed.  You don’t see a fragment of text containing your keyword like you do with Google, so you have to click through to see the whole feed.  Frankly this is probably good, because you get a better idea of what the blogger writes about by seeing numerous posts, so you can make a more informed decision before you subscribe.  If you are used to search engines like Google and Bing, though, you might find it a bit frustrating.

Bias Towards the Recent

By their very nature, feeds only show the most recent posts.  So just like RSS search engines of the past, TFD is going to have a bias towards more recent posts.  Yet it’s not trying to be a breaking news search engine.  One should just keep this in mind.

Openness of Search and Listings

It is to Pine.blog’s credit that they made TFD pretty open for everyone to use and to submit their feeds to.  Of course they make it super easy to add a feed to your Pine.blog timeline (reader), but searchers who use a third-party or self-hosted reader can also use the search function to find good feeds and, with a tad more work, add those feeds to a feed reader of their choice.  Win-win.

Part of that openness is allowing anybody who registers for free to submit their feeds – subject to editor review.  TFD does not only list blogs hosted by Pine.blog.  It is not a closed ecosystem like Facebook and Twitter.

Providing a search API is another part of the openness.

Not a Search Engine

TFD, even in its early stages, is so slick you might start thinking of it as you would a search engine.  But it is still a directory, even though it spiders content.

  1. Feeds submitted are subject to review by a human editor.
  2. The directory does not search the web for feeds.  TFD isn’t going to just find you; you have to add your feed URL.  This means if you want your blog’s feed to be included, you need to submit it.
  3. The bias towards more recent posts (see above).  It will not have the depth of a fully spidering web search engine.

None of the above are negatives.  In fact, human edited directories are a plus.

Conclusions

I’m on record as wanting a new Technorati or IceRocket RSS search engine.  TFD is a really good start.  It is not perfect, but it is kind of a big deal; one would think the blogging community would be burning up the pixels talking about it.

I highly recommend that all active bloggers add their feeds, because someday this will be a great way to attract readers.  The index is rather small right now, and it will only get better with more feeds listed in it.  I also recommend people use the search function on The Feed Directory to find good blogs to read, wherever or however you read RSS feeds.  Heck, I may add it to the Vivaldi browser as a search engine.

This was also posted to
/en/linking.



Reply to: Bookmarked Personal sites are awesome by Andy Bell (personalsit.es).

It is an inventive use of Github.  I definitely like directories of personal sites.  Personal site directories are best populated by webmaster URL submissions, because it’s hard for an editor to figure out the themes of a site in a timely manner.  One problem here is that there is a lot of friction to submitting a site, although the email workaround is thoughtful.

I can’t help but wonder about building a similar directory site that aggregates its data by Webmention and uses the h-cards from websites to automatically update itself.
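As a sketch of how that self-updating directory might read an h-card, here is a deliberately naive microformats2 reader using only Python’s standard library.  A real implementation would use a proper mf2 parser (e.g. the mf2py library), and the markup in the example is invented for illustration:

```python
# Naive h-card reader: grab the first p-name and u-url inside an h-card.
# Good enough to sketch the idea; real mf2 parsing is more involved.
from html.parser import HTMLParser

class HCardScraper(HTMLParser):
    """Collects the first u-url href and p-name text found inside an h-card."""
    def __init__(self):
        super().__init__()
        self.in_hcard = False
        self.in_name = False
        self.name = None
        self.url = None

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if "h-card" in classes:
            self.in_hcard = True
        if self.in_hcard and "u-url" in classes and self.url is None:
            self.url = dict(attrs).get("href")
        if self.in_hcard and "p-name" in classes and self.name is None:
            self.in_name = True  # next text node is the name

    def handle_data(self, data):
        if self.in_name:
            self.name = data.strip()
            self.in_name = False

def parse_hcard(html):
    """Return the listing data a directory could store for one site."""
    p = HCardScraper()
    p.feed(html)
    return {"name": p.name, "url": p.url}
```

The directory cron job would fetch each member’s homepage, run it through something like this, and refresh the listing’s name and URL automatically.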

Hrrm.  What happens if the directory admin finds a simply stunning site that has no webmention or h-card capability?  Do we abdicate our human judgement and automatically disqualify a more than worthy site because it does not have a barcode we like?  I always have a problem with using code alone as an up-front criterion for inclusion.  Or perhaps my bar is set too low: 1. it has a web address, preferably its own domain; 2. the site renders and is readable to humans; 3. the site has good content.  I do think a directory such as you describe is worth trying as a long-term experiment.

The strength of a directory over a search engine is the human editing. Search engines cannot measure quality, only popularity.  Why give up, even partially, one of your few advantages?

The second thing: I don’t think a directory in 2019 can rely on webmaster submissions alone to grow.  A whole new generation of webmasters have come of age without directories, webrings and the like.  They don’t understand the need to submit (silos), they don’t know they should submit their URL, they don’t know how to submit URLs, and they don’t know how a directory works or how to search one.  I think any new directory has to offer URL submission, but in the end it must also go out and find good sites.



Reply to: Bookmarked NowNowNow (nownownow.com)


First, thank you for bringing this to my attention.  I have listed NowNowNow in the Hyperlink Nodes Directory as a niche directory.

Second, yes!  This is exactly what I’ve been yammering on about with decentralized search (or decentralized discovery, if you will).  It’s not without some issues, as you have pointed out – one issue is whether it will scale – but it’s a nice effort that can be built upon.  Also, it’s about individual people, which is what the Web is about.

Third, it’s a nice fresh approach to a blog directory, presuming it’s mostly blogs that have a /now page.  If you say “blog directory” everybody yawns; if you say “/now page directory” everybody perks up with interest.  /now pages are a good hook for personal blogs.

I’m curious how we might expand this sort of concept to other types of online directories.  Is there anything else useful about how this one is set up?


Going Conventional

You could replicate this with a good commercial PHP directory script: just insist that the primary URL submitted be the /now page and add whatever custom fields you need for bio information.  This would allow submission from the website instead of the two-step email process.  It would also give the directory admin tools like dead-link checking that help maintain the directory over time, plus a search function, the option to add hierarchical categories or leave it flat, the ability to sort listings in different orders, and an RSS feed of new listings.

A specialized personnel or membership directory script could be adapted too.

Custom/Future

What is swirling around in my head is some sort of fusion of NowNowNow, Microcast.club and webmentions like href.cool can send, plus a conventional directory script for those backend admin tools.

  • I like the webring aspects of Microcast.club: 1. it helps keep the directory current by requiring a bit of code on the page listed – if the code disappears, you drop out; 2. it lets people optionally surf it like a webring; 3. it provides a link back to the directory – two-way linking provides more traffic for everyone and frankly helps the directory rank in search engine results too.  Downside: it is a mandatory reciprocal link, which I’m not totally comfortable with.
  • href.cool sends a webmention to all sites that get listed.  Truly an indieweb directory by design. Trackback sending would be a nice backup to that because not everybody has webmentions.
  • NowNowNow provides submission that is 1. human reviewed, plus 2. URL submission that is more democratic than Microcast.club’s – any site with a /now page can submit.  I say this even though I see email submission as problematic.
  • A conventional directory script would provide search, URL submission and other sorting options which would make things scale better and be maintained better long term by one admin.

Yes, we need a 21st-century rethink of the conventional PHP directory script.  The above are some elements that could be incorporated.
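The webmention-sending piece of this (what href.cool does for its listings) can be sketched with nothing but the standard library.  This follows the W3C Webmention flow – discover the target’s endpoint, then POST `source` and `target` to it – though for brevity the discovery step here only checks the HTTP Link header, not `<link>` and `<a>` tags in the page body, which a real sender must also check.  The URLs are placeholders:

```python
# Sketch of a directory sending a webmention to each newly listed site.
import re
import urllib.parse
import urllib.request

def endpoint_from_link_header(link_header, base):
    """Extract a webmention endpoint URL from an HTTP Link header, if present."""
    # Matches e.g. <https://example.com/wm>; rel="webmention"
    for url, rel in re.findall(r'<([^>]+)>;\s*rel="?([^",]+)"?', link_header):
        if "webmention" in rel.split():
            return urllib.parse.urljoin(base, url)  # resolve relative endpoints
    return None

def send_webmention(source, target):
    """Notify `target` that `source` (our directory listing page) links to it."""
    with urllib.request.urlopen(target) as resp:
        endpoint = endpoint_from_link_header(resp.headers.get("Link", ""), target)
    if endpoint is None:
        return False  # no webmention support; Trackback could be the fallback
    data = urllib.parse.urlencode({"source": source, "target": target}).encode()
    with urllib.request.urlopen(urllib.request.Request(endpoint, data=data)) as resp:
        return resp.status in (200, 201, 202)
```

The directory would call `send_webmention` once per approved listing, with the listing’s own page as the source.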


These are notebook type thoughts.  I’m developing my thoughts.

I have been thinking of a directory of directories and how that would work.  There is no value in just listing a bunch of spammy directories built only for SEO purposes.  And I have already listed the good traditional directories in Indieseek.

However, what if I got away from what I normally consider a “directory” and started thinking about hyperlink nodes – collections of hyperlinks?

  • Good Directories – fellow travelers with what Kicks Condor and I are trying to do.
  • Good Indie Search Engines – with their own index.
  • Blogrolls and Following Pages – extensive ones.
  • Link Pages – extensive ones.
  • Niche Directories – many are built using pages on a blog but I keep running into them.
  • Webrings – functioning ones.  (Maybe ??)
  • Other – You know them if you look close, but they may not be obvious.

The first two bullet points I have, or can easily, cover here on Indieseek, but the latter points, from Blogrolls down, I don’t think have ever been mapped out.  It’s a bit like cataloging the forgotten notebooks of the web.

The bulleted list above would probably make a good set of top level categories.

Value

The second question is: does this have any value?  And for that I’m not sure.  I could see somebody using the Blogroll category to find blogs that other individuals are following. Each blogroll is a word of mouth recommendation.  And that’s the thing, these relatively small hyperlink nodes, mostly all have humans making them and that has a value – somehow.  This gets down to the grass roots of the Web.  The Web, by definition, is about hyperlinks and linking – by humans for use by humans, not algorithms.

I don’t think it will ever be huge, but it may over time end up being larger than I anticipate.  Which is why I’m leaning toward using a directory script vs. trying to do this on the wiki.  It’s easier to maintain on a purpose-built script.

I don’t think it will get used a lot by the public.  I don’t think the public will understand it.  I don’t really care about either of those.  Some people collect rocks, I collect links. *Sigh*  At the minimum this would be a place to keep bookmarks of these hyperlink nodes.

Seems like this could be a good place to store these links publicly since I keep finding them.


I’m still convinced that the most fruitful area for web directories is in niche directories, that is single topic directories.  For example: it could be about fishing, bass fishing, fly fishing, keeping tropical fish, or fishing in your area.*  (*Fishing in your area is more of a business type directory with addresses and maps, but that’s great too.)

What you are doing in a niche is: A. establishing yourself as an expert on that topic as it exists on the web, so that you can B. draw people in because of your knowledge, or C. draw in a community of people who share the same interest.  It’s the people of B or C who will use the directory.

But there are some things you need to look out for:

  1. You need to really like the topic yourself and have some knowledge about it.  Directory building is hard work and you need to stay interested.  You don’t have to be an expert; if you are willing to learn, your expertise will grow as you build.
  2. You need a topic that people are passionate about.  Passionate enough to make websites and blogs on the topic, not just post to Facebook.
  3. The topic needs to be filled with arcane knowledge that people want to share and find.  It can’t be a topic where one Wikipedia article covers everything people want to know.  Topics like pen-and-paper role playing games, US Civil War history, various collecting hobbies, UFOs, etc. are all topics that amateurs write about and endlessly discuss and debate.  Those are the kinds of topics that generate blogs and websites.
  4. The topic needs to be broad enough to collect an audience, if you go too narrow you won’t find enough websites to list.
  5. You are going to need more than just a directory.  Combine your niche directory with a blog, wiki, knowledge base, big site of static pages, or forums – maybe even three of these.  You need some other content that can be added to and updated to combine with your directory.  This is 2018, and people don’t seek out directories like they used to.

Except for mentioning “fishing in your area” I’m not going to go into local business directories yet.  They are a slightly different animal and I’ll save that for another post.

Finally, don’t compete directly against Google and Bing.  Don’t fill your directory up with the same sites the search engines have on the first 3 pages.  Dig deeper.  Bring to light websites that the search engines overlook but are good sites.

If you have any questions just ask in the comments below.



In reply to: Federated Wiki and Directories by Kicks Condor


Federated Wiki


I like the idea, but like the author says, it’s best for working on one narrow subject per instance, which would mean that I, being a generalist, would end up with 20 or 30 Fed wikis.  Gah!

I think we start talking about federating our directories


This could help spread the net wider plus leverage two indexes for more depth.  Sounds like a portal page, and the public might find that more useful.


central search engine


There are a couple of good off-the-shelf, open source search engines that can crawl across multiple domains.  My suggestion would be to try those first and see what the bugs and limitations are before spending the time coding something new.  This might be the easiest thing to execute.

Another option might be a metasearch script.  In the old days there were several types: 1. Parallel search: this was the easiest – a search would simply return the top X results from engine1, then engine2, then engine3, etc.  There was no attempt to blend the results or change the ranking, so it was an easier scrape to code.  2. A second type returned results and simply showed you a list of engines that had the same site in their first X results.  Again, these were scrapes.  I don’t think these early metasearch engines had any algorithm; you just looked at the results, and if 2 or 3 engines all had a site on the first page it was probably decent.  3. The most sophisticated metasearch engines use APIs, and there really isn’t much we can do with that.

My point is that #1, parallel search, might be doable.  There may be some open source metasearch scripts around that could be adapted.
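The parallel approach really is that simple to code.  Here is a minimal sketch: the engine functions are stand-ins (a real version would scrape or query each engine), and each engine’s top N results are concatenated in order, with no blending or re-ranking.

```python
# Sketch of old-style "parallel" metasearch: return each engine's top N
# results grouped by engine, in the order the engines are listed.
def search_engine1(query):
    # placeholder: a real version would fetch results from engine 1
    return [f"engine1-result-{i}-for-{query}" for i in range(1, 11)]

def search_engine2(query):
    # placeholder: a real version would fetch results from engine 2
    return [f"engine2-result-{i}-for-{query}" for i in range(1, 11)]

def parallel_search(query, engines, top_n=5):
    """No merged ranking: just engine1's top N, then engine2's, and so on."""
    results = []
    for name, engine in engines:
        for hit in engine(query)[:top_n]:
            results.append((name, hit))
    return results

serp = parallel_search("web directories",
                       [("Engine 1", search_engine1),
                        ("Engine 2", search_engine2)],
                       top_n=3)
```

Because nothing is blended, two small directories could federate this way with almost no coordination: each just needs some way to return its top results for a query.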

Portal: I think I will throw around portal ideas in another post rather than clutter up this one too much.


categorization systems


This could be as simple as a spreadsheet-like cross-reference table: this category on KicksSearch corresponds most closely with this category on Indieseek.  And a reverse one.  Each category name would be a link to that category.  This might encourage deeper browsing.  It’s also simple HTML.

Or it could be a flow chart but in any event keep it simple and clicky. (Portal)
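The cross-reference table idea is literally just a mapping plus a loop.  A sketch, with made-up category names and URLs:

```python
# Cross-reference table: each category on one directory mapped to its
# closest match on the other, rendered as plain HTML with each name
# linking to that category's page on its own directory.
category_map = {
    # Indieseek category     ->  closest KicksSearch category (both invented)
    "Computers/Indieweb":        "Tech/Indieweb",
    "Recreation/Outdoors":       "Life/Outdoors",
    "Arts/Writing":              "Creative/Writing",
}

def cross_reference_table(mapping, left_base, right_base):
    """Render the mapping as a simple two-column HTML table."""
    rows = []
    for left, right in mapping.items():
        rows.append(
            f'<tr><td><a href="{left_base}/{left}">{left}</a></td>'
            f'<td><a href="{right_base}/{right}">{right}</a></td></tr>'
        )
    return ("<table><tr><th>Indieseek</th><th>KicksSearch</th></tr>"
            + "".join(rows) + "</table>")

html = cross_reference_table(category_map,
                             "https://indieseek.xyz/links",
                             "https://example.com/kickssearch")
```

The reverse table is the same function with the dict inverted, so keeping the two in sync is trivial.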


collecting submissions


I’m getting a brain flood of proto-ideas:

  1. Something like Indieweb.xyz?
  2. Twitter submit: put out a tweet (these could be themed) asking for submissions; all they have to do is retweet with a URL they want to submit.  Cherry-pick the best.  (Heck, we could do this right now, since both blogs and Twitter support webmentions.)

These could be important, because there seems to be some sort of resistance to submitting URLs right now.

syndicate entries

This may not be exactly what you are thinking of but:

  1. RSS feed: the directory already generates a feed of the 5 newest sites added.
  2. Toplist generation: I know I can generate just about any type of top list available (e.g. Top 10 Categories, Top 5 Links by Clicks, Top Rated, Newest Comments, etc.).  There is also a way to export these top lists via JavaScript.  I have no idea how this all works, but it’s there.

Directory of Directories?

This may only be useful in the future.  I know we have slightly different definitions of directories, but we could easily make a directory of directories if there are enough quality sites worth indexing.  I just mention this because my license for WSNLinks lets me use as many copies as I want, with full support, so long as they are on the domain Indieseek.xyz or a subdomain of it.  Just FYI.


I need to capture something I suggested on Micro.blog yesterday before I lose track.  Fortunately Kicks has a copy:

Question: wikis as we currently know them are more about writing (a knowledge base) connected in a non-linear fashion with minimal hierarchy, so can one make a wiki/directory – a fusion of the two forms?  Not quite a wiki as we presently know it, but not quite a directory either – still a portal that lists other websites and is used to navigate the web.  It would need to be searchable.  Next question: would it be usable – or, to put it another way, would the man on the street understand it?  Can it scale?

I think this is something to think about.

Source and my reply.


I suspect Kicks has already thought of this.  I swear the guy is a walking ideavirus.  But it’s new to me, so I get to play with the idea.

What I’m thinking of is a wikidirectory, which would index outside links but be capable of cross-indexing, the way wikis do, within the listing’s description field.  You could also have a field below the description for editor’s notes.  (i.e. “This is yet another niche directory run by BradEnslen, which means it is all done in GSA gray and boring to look at.”  Wherein the italicized words would lead you off to a page on either niche directories or a page on BradEnslen, which would have a linked bibliography of all known websites made by said BradEnslen.)

It’s really interesting but it would be a nightmare for one person to maintain if it had any size.

Still there is a germ of something interesting there.


And why it matters.

It matters because the first 5 listings are going to be clicked on most.  Just like in a search engine, where the top 3 listings for a query get the most clicks. (Thus spawning the whole SEO industry.)

Alphabetical: The oldest way to rank listings in a category is alphabetically.  Which is great if your website’s name starts with the letter “A”.  The alphabetical thing started a whole trend back in the Yellow Page phone book days of naming businesses “Acme”, “Ace”, “AAA Exterminators” etc. to game the listings and be at the top.  That is the whole downside of alphabetical.

X Rank: Another way to rank listings is using a third-party ranking factor like PageRank, Alexa Rank, etc.  The problem with this is that the rankings are all based on popularity, so if you put the most popular sites first, they stay the most popular.  That does not seem fair, especially for new sites that have no rank.

Click Rank: Another factor that used to be popular was Click Rank.  Sites that got clicked on the most rose up in the ranks.  Again the problem was once they got to the top they tended to stay there forever.

Rating and Comments:  Many directories have a formula that will rank sites by user rating (usually stars) and/or how many comments the listing gets.  This never really worked out.  Webmasters always tried to game the system.  Also, in search, most people don’t bother to rate or comment; they just want to find answers to their query as quickly as they can.

Editor’s Rating:  This is a subjective rating given by editors to the websites within a given category.  This can work well when you have expert editors taking care of subjects they are experts in.  But it falls apart quickly when you have one generalist trying to judge sites on subjects they know little about.  However, if you do have expert editors, this might be the best of the lot.

Here at Indieseek.xyz I use Alphabetical ranking in categories.  I could use the others but alphabetical keeps it simple and does not add to server load.
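For the curious, alphabetical ranking amounts to a one-line case-insensitive sort.  The sketch below also ignores a leading “The ”, so “The Example Blog” files under E – that tweak is my own addition, not necessarily what the directory script does:

```python
# Case-insensitive alphabetical ranking of directory listings,
# optionally skipping a leading "The " article.  Listing names invented.
def sort_key(title):
    t = title.lower()
    if t.startswith("the "):
        t = t[4:]  # "The Example Blog" sorts as "example blog"
    return t

listings = ["Zebra Crossings", "acme widgets", "The Example Blog", "AAA Exterminators"]
ranked = sorted(listings, key=sort_key)
```

Even with the tweak, note that the “AAA” naming trick still wins – which is exactly the downside described above.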


Right now, in the West, web search is highly centralized.  About 90 percent is in the hands of one company with one index of the web: Google.  The other 10 percent is split between Bing, with its own index, and a few others.  But wait, it gets worse: most of the “few others” use Bing’s index, and some use Google’s index.  Only a couple of other, very small, search engines actually crawl the web and have their own search index.

Both major search engines are really set up to find brand-name, corporate, commercial web pages, or at least they favor those.  The non-commercial blogs, personal sites, and fun websites the Indieweb movement wants to encourage don’t get the same sort of traction.

So we have too few search engines which are not focused on independent, non-commercial websites. What do we do?

IMHO one way to fix this is Decentralizing Search:

  1. Have 8 major search engines with their own indexes.  We may be moving toward that but progress is slow and the Web so vast that it is very expensive to set up and run a major web crawling search engine. This won’t happen overnight.
  2. Have many small directory type search indexes.  There are still enough good directory scripts around that one can set up a directory, with a lot of features, at a very low cost.
  3. Blogrolls, link pages and maybe webrings.  We used to surf the web, we can learn to do it again.

Number 2.

It’s #2 above I want to discuss.  If we create many hundreds of directories and search engines, large and small, all dedicated to listing interesting or fun independent websites and pages, then we create our own discovery network.  Each index is a unique collection, presented differently from the others; together they help break the dependency on Google and Bing in the near to mid term, at least until more major search engines get established.

And by “directory” I mean: A. a directory intended to help navigate the web, NOT to sell links for SEO; B. traditional directories like Indieseek.xyz, newer types of directories running hand-coded scripts, hybrid directories that somehow incorporate a crawling spider, directories that incorporate webrings, local business directories, niche directories, even automated directories of some sort – whatever kind of index human ingenuity can invent.  By this means we start taking back the web and remaking it into something we can better enjoy.

This is not going to replace big crawling search engines.  And one directory, even one 100 times the size of Indieseek, is not going to make a difference, but large numbers will.  If everywhere we throw a stick we hit some directory, people will start exploring, just like they explore bookstores, libraries, or snoop through the bookshelves at a friend’s house.

20 years ago, when all the search engines sucked, I and others searched for websites with a battery of maybe 10 favorite search engines and directories in our browser bookmarks.  It’s not hard to do that again IF the tools are available.  That is my vision of decentralized search.  We don’t have to wait passively for others, big companies and venture capital, to solve the problem for us, we can do it now. Ourselves.  That is the beauty of the Web.

Indieseek.xyz is my stab at a proof of concept.  I know Kicks Condor is working on his own directory vision, and other concepts like Wiby.me are out there.  Coders: take a stab at it.  If we can do this, you can do this.
