I just built a PoC search engine/crawler for Twtxt. I managed to crawl this pod (twtxt.net) and a couple of others (sorry @etux and @xuu, I used your pods in the tests too!). So far so good. I might keep going with this and see what happens 😀
@prologic @etux @xuu (#37xr3ra) Now I want to remove the “domain” restriction, add a rate limit, and try to crawl as much of the wider Twtxt network as I can and see how deep it goes 🤔
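A minimal sketch of what that rate-limited crawl loop could look like in Go, using golang.org/x/time/rate; the seed URL is just a placeholder, not the PoC's actual code:

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"

	"golang.org/x/time/rate"
)

// crawl fetches each feed URL, waiting on a shared limiter so that
// no more than one request per second goes out across the crawl.
func crawl(ctx context.Context, urls []string) {
	limiter := rate.NewLimiter(rate.Every(time.Second), 1) // 1 req/s, burst 1
	for _, u := range urls {
		if err := limiter.Wait(ctx); err != nil {
			return // context cancelled
		}
		resp, err := http.Get(u)
		if err != nil {
			fmt.Println("fetch failed:", u, err)
			continue
		}
		resp.Body.Close()
		fmt.Println("fetched:", u, resp.Status)
	}
}

func main() {
	// placeholder seed; a real crawler would read a seed file
	crawl(context.Background(), []string{"https://twtxt.net/user/prologic/twtxt.txt"})
}
```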
(#37xr3ra) @lyse @prologic Very curious… I worked on a very similar track. I built a spider that traces any follows, comments, and mentions from other users, and came up with:
(#37xr3ra) @prologic Yeah, it reads a seed file (I'm using mine). It scans for any mention links and then scans those recursively. It reads from HTTP(S) or Gopher. I don't have much of a DB yet… it just writes each feed to disk and checks modified dates… but I will add a DB that has hashes/mentions/subjects and such.
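A rough sketch of that recursive mention scan, assuming the twtxt @<nick url> mention syntax; Gopher support and the modified-date check are left out, and the on-disk naming scheme is invented for illustration:

```go
package main

import (
	"bufio"
	"crypto/sha256"
	"fmt"
	"net/http"
	"os"
	"path/filepath"
	"regexp"
)

// twtxt mentions look like @<nick https://example.com/twtxt.txt>;
// capture group 1 is the feed URL to follow.
var mentionRe = regexp.MustCompile(`@<\S+ (https?://\S+?)>`)

// scanFeed fetches a feed, caches it on disk, and recurses into
// every mentioned feed it hasn't seen yet.
func scanFeed(url, dir string, seen map[string]bool) {
	if seen[url] {
		return
	}
	seen[url] = true

	resp, err := http.Get(url) // Gopher would need its own client
	if err != nil {
		return
	}
	defer resp.Body.Close()

	var body []byte
	var next []string
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		body = append(body, sc.Bytes()...)
		body = append(body, '\n')
		for _, m := range mentionRe.FindAllStringSubmatch(sc.Text(), -1) {
			next = append(next, m[1])
		}
	}

	// write the raw feed to disk, keyed by a hash of its URL
	name := fmt.Sprintf("%x.txt", sha256.Sum256([]byte(url)))
	os.WriteFile(filepath.Join(dir, name), body, 0o644)

	for _, u := range next {
		scanFeed(u, dir, seen)
	}
}

func main() {
	os.MkdirAll("feeds", 0o755)
	seen := map[string]bool{}
	scanFeed("https://twtxt.net/user/prologic/twtxt.txt", "feeds", seen)
	fmt.Println("crawled", len(seen), "feeds")
}
```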
(#37xr3ra) @prologic The add function just recursively scans everything, but the idea is to just add any new mentions and then have a cron to update all known feeds.
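The cron-style updater could be little more than a ticker loop; the interval and feed list here are made up:

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// refreshLoop re-fetches every known feed on a fixed interval,
// standing in for the cron job mentioned above.
func refreshLoop(feeds []string, every time.Duration) {
	ticker := time.NewTicker(every)
	defer ticker.Stop()
	for range ticker.C {
		for _, u := range feeds {
			resp, err := http.Get(u)
			if err != nil {
				log.Println("refresh failed:", u, err)
				continue
			}
			resp.Body.Close() // a real updater would diff for new twts here
		}
	}
}

func main() {
	// made-up interval and feed list
	refreshLoop([]string{"https://twtxt.net/user/prologic/twtxt.txt"}, 15*time.Minute)
}
```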
(#37xr3ra) Wait… so you actually wrote a more elaborate crawler without taking a shortcut like I did with colly (not that it really helps much)? Hmmm 🤔 Can we take it a bit further: make a daemon/server out of it, add a web interface to search what it crawls using bleve, and build some tools (API, Web UI) to let people add more “feeds” to crawl? 🤔
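For the search side, a sketch of how bleve could index crawled twts; the Twt struct and index path are invented for illustration:

```go
package main

import (
	"fmt"
	"log"

	"github.com/blevesearch/bleve/v2"
)

// Twt is the unit being indexed: one feed line plus where it came from.
// (Struct and field names are invented for this sketch.)
type Twt struct {
	Feed string `json:"feed"`
	Text string `json:"text"`
}

func main() {
	// create the index the daemon would maintain (bleve.Open for an existing one)
	index, err := bleve.New("twts.bleve", bleve.NewIndexMapping())
	if err != nil {
		log.Fatal(err)
	}

	// index a twt under a stable id, e.g. its twt hash
	if err := index.Index("37xr3ra", Twt{
		Feed: "https://twtxt.net/user/prologic/twtxt.txt",
		Text: "I just built a PoC search engine/crawler for Twtxt.",
	}); err != nil {
		log.Fatal(err)
	}

	// the Web UI / API would run queries like this
	query := bleve.NewMatchQuery("crawler")
	result, err := index.Search(bleve.NewSearchRequest(query))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result)
}
```

A real index mapping would likely want per-field analyzers and the twt hash as the document ID, but the query path stays about this small.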
@prologic (#37xr3ra) Sounds about right. I tend to try to build my own before pulling in libs; I learn more that way. I was looking at using it as a way to build my twt mirroring idea, and to test the lex parser against a wide-ranging corpus to find edge cases (the PGP-signed feeds, for one).
@prologic (#37xr3ra) In theory you shouldn't need to let users add feeds… if they get mentioned by a tracked feed, they get added automagically. On a pod it would just need to scan the twtxt feed to know about everyone.