fix: Deduplicate Source.articles by normalized URL and content fingerprint #718
Draft
Conversation
Co-authored-by: AndyTheFactory <863810+AndyTheFactory@users.noreply.github.com>
Agent-Logs-Url: https://github.com/AndyTheFactory/newspaper4k/sessions/0e11cc93-3086-46a3-8a35-84b5043bc0ae
Copilot (AI) changed the title from "[WIP] Fix duplicate articles on news source due to subdomain omission" to "fix: Deduplicate Source.articles by normalized URL and content fingerprint" on Mar 21, 2026.
AndyTheFactory requested changes on Apr 11, 2026.
        str: A hex-encoded SHA-256 digest of the concatenated title and
            text.
        """
        content = (article.title or "") + (article.text or "")
AndyTheFactory (Owner) commented:
@copilot replace tabs and non-breaking spaces with spaces, remove multiple spaces and any punctuation, and convert everything to lowercase before computing the hash.
Copilot (Contributor, Author) replied:
Done in commit 224dcc5. Before computing the hash, the combined title+text is now normalized: tabs and non-breaking spaces (\xa0) are replaced with regular spaces, multiple consecutive spaces are collapsed, punctuation is stripped, and the result is lower-cased.
Agent-Logs-Url: https://github.com/AndyTheFactory/newspaper4k/sessions/80f97daa-07cd-4e9c-93c2-cf2e1e80e936
Co-authored-by: AndyTheFactory <863810+AndyTheFactory@users.noreply.github.com>
Copilot (Contributor, Author):
Just as a heads up, I was blocked by some firewall rules while working on your feedback. I tried to connect to the following addresses, but was blocked by firewall rules: (address list truncated). If you need me to access, download, or install something from one of these locations, you can either: (options truncated).
Same article discovered under variant URLs (http://theatlantic.com/… vs https://www.theatlantic.com/…) bypassed the existing exact-URL dedup and appeared multiple times in Source.articles.

Related Issues
Proposed Changes:

Root cause: _generate_articles() keyed dedup on article.url verbatim, so scheme (http/https) and www. subdomain differences produced distinct keys for identical articles.

newspaper/source.py
- _normalize_url_for_dedup(url): new static method; strips the scheme and www. prefix and normalises the trailing slash. The result is used only as a dict key, never for fetching.
- _generate_articles(): adds a second dedup pass using the normalised key after the existing exact-URL dedup. Skipped URLs are logged at DEBUG.
- _get_article_fingerprint(article): new static method; computes sha256(title + text) as a hex digest. Before hashing, the content is normalised: tabs and non-breaking spaces are replaced with regular spaces, multiple spaces are collapsed, punctuation is removed, and the result is lowercased. This ensures minor formatting differences do not prevent duplicate detection.
- parse_articles(): adds a fingerprint-based dedup pass after parsing, catching duplicates that survive URL normalisation (e.g. redirect chains landing on the same content). Skipped URLs are logged at DEBUG.

tests/unit/test_source.py
- test_source_parse_articles: mocked parse now sets unique title/text per article (empty content was previously allowed to collide on the fingerprint).
- New tests: test_normalize_url_for_dedup, test_generate_articles_deduplicates_www_vs_no_www, test_generate_articles_deduplicates_http_vs_https, test_get_article_fingerprint, test_get_article_fingerprint_normalizes_whitespace_and_case, test_parse_articles_deduplicates_by_content.

How did you test it?
All 15 unit tests in tests/unit/test_source.py pass, including 6 new targeted tests covering URL-normalization dedup, content-fingerprint dedup, and fingerprint normalization (tabs, non-breaking spaces, multiple spaces, punctuation, and case differences).

Notes for the reviewer
- The fingerprint pass runs only in parse_articles() (i.e. after download + parse), so it does not affect users who only call generate_articles().
- The _normalize_url_for_dedup key is intentionally scheme-free; it is never passed to any HTTP client.
- is_valid_body() should already filter those out before the fingerprint pass.

Checklist
- The PR title uses one of the conventional prefixes: fix:, feat:, build:, chore:, ci:, docs:, style:, refactor:, perf:, test:.
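To make the scheme-free key and the second dedup pass concrete, here is a minimal sketch under the assumptions above; the function names and exact normalisation rules are illustrative, not the merged newspaper4k code:

```python
from urllib.parse import urlparse


def normalize_url_for_dedup(url: str) -> str:
    """Build a scheme-free dedup key: drop the scheme and a leading
    'www.', normalise the trailing slash. Never used for fetching."""
    parsed = urlparse(url)
    host = parsed.netloc.lower()
    if host.startswith("www."):
        host = host[len("www."):]
    path = parsed.path.rstrip("/")
    query = ("?" + parsed.query) if parsed.query else ""
    return host + path + query


def dedup_by_key(urls):
    """Second dedup pass: keep the first URL seen per normalised key."""
    seen, kept = set(), []
    for url in urls:
        key = normalize_url_for_dedup(url)
        if key in seen:
            continue  # duplicate; the PR logs skipped URLs at DEBUG
        seen.add(key)
        kept.append(url)
    return kept
```

In this sketch, http://theatlantic.com/politics/ and https://www.theatlantic.com/politics collapse to the same key, theatlantic.com/politics, so the second URL is dropped by the dedup pass.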