4 points | by fullstacking 2 days ago
3 comments
This is great. It would be even more useful if it included the HN id, like at the end here.
https://news.ycombinator.com/item?id=44662452
I had not thought to save that data since I had no use for it. My scraper only gets the URL and then hits that URL to get all the other fields of data manually. Sorry.
Top 10 domains submitted in this dataset:
[Domain, link count]
github.com 7819
www.youtube.com 3990
hackaday.com 1590
www.theguardian.com 1553
en.wikipedia.org 1509
arxiv.org 1317
www.theregister.com 1196
arstechnica.com 1001
www.nytimes.com 967
medium.com 910
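A tally like the one above can be reproduced from a plain list of submission URLs. A minimal sketch (the function name and sample URLs here are illustrative, not the poster's actual scraper):

```python
from collections import Counter
from urllib.parse import urlparse

def top_domains(urls, n=10):
    """Count submissions per domain (netloc) and return the n most common."""
    counts = Counter(urlparse(u).netloc for u in urls)
    return counts.most_common(n)

# Tiny sample for illustration -- not the real dataset
urls = [
    "https://github.com/foo/bar",
    "https://github.com/baz/qux",
    "https://en.wikipedia.org/wiki/Example",
]
print(top_domains(urls, n=2))
# [('github.com', 2), ('en.wikipedia.org', 1)]
```

Note that `urlparse` keeps the `www.` prefix, which matches how the table above distinguishes e.g. `www.youtube.com` from bare domains.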