mirror of
https://github.com/ilri/csv-metadata-quality.git
synced 2025-05-10 15:16:01 +02:00
Add checks and unsafe fixes for mojibake
This detects whether text has likely been encoded in one encoding and decoded in another, perhaps multiple times. This often results in the display of "mojibake" characters. For example, a file encoded in UTF-8 is opened as CP-1252 (the Windows Latin codepage) in Microsoft Excel and saved again as UTF-8. You will see strings like this in the resulting file:

- CIAT PublicaÃ§ao
- CIAT PublicaciÃ³n

The correct versions of these in UTF-8 would be:

- CIAT Publicaçao
- CIAT Publicación

I use a code snippet from Martijn Pieters on StackOverflow to detect whether a string is "weird" as determined by the excellent "fixes text for you" (ftfy) Python library, then check whether the weird string encodes as CP-1252 or not. If so, I can try to fix it.

See: https://stackoverflow.com/questions/29071995/identify-garbage-unicode-string-using-python
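The round trip described in the commit message is deterministic, so it can be sketched with the standard library alone. This is an illustration of the failure mode and the "unsafe" fix, not code from the commit; the sample string follows the commit message's example:

```python
good = "CIAT Publicación"

# UTF-8 bytes misread as CP-1252 (e.g. by Excel) and saved again:
bad = good.encode("utf-8").decode("cp1252")
print(bad)  # CIAT PublicaciÃ³n

# The fix is the inverse round trip. It is "unsafe" because it only
# works when the bytes happen to be valid CP-1252 and the result
# happens to be valid UTF-8.
fixed = bad.encode("cp1252").decode("utf-8")
print(fixed == good)  # True
```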
@@ -11,6 +11,8 @@ from pycountry import languages
 from stdnum import isbn as stdnum_isbn
 from stdnum import issn as stdnum_issn
 
+from csv_metadata_quality.util import is_mojibake
+
 
 def issn(field):
     """Check if an ISSN is valid.
@@ -345,3 +347,22 @@ def duplicate_items(df):
             )
         else:
             items.append(item_title_type_date)
+
+
+def mojibake(field, field_name):
+    """Check for mojibake (text that was encoded in one encoding and decoded
+    in another, perhaps multiple times). See util.py.
+
+    Prints the string if it contains suspected mojibake.
+    """
+
+    # Skip fields with missing values
+    if pd.isna(field):
+        return
+
+    if is_mojibake(field):
+        print(
+            f"{Fore.YELLOW}Possible encoding issue ({field_name}): {Fore.RESET}{field}"
+        )
+
+    return
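The `is_mojibake` helper imported in the first hunk lives in util.py; the commit message describes it as ftfy's "weirdness" detection combined with a CP-1252 encodability test. As a rough, stdlib-only stand-in (a hypothetical simplification, not the library's actual implementation), one can flag a string when the inverse CP-1252/UTF-8 round trip succeeds and changes it:

```python
def is_mojibake(text: str) -> bool:
    """Heuristic stand-in: suspect mojibake if an encode-as-CP-1252 /
    decode-as-UTF-8 round trip succeeds and alters the string. The real
    helper uses ftfy's weirdness detection instead of this shortcut."""
    try:
        repaired = text.encode("cp1252").decode("utf-8")
    except UnicodeError:
        # Not representable in CP-1252, or not valid UTF-8 afterwards:
        # no evidence of this particular encoding mix-up.
        return False
    return repaired != text

print(is_mojibake("CIAT PublicaciÃ³n"))  # True
print(is_mojibake("CIAT Publicación"))   # False
```

Note that correctly encoded accented text fails the UTF-8 decode step (a lone 0xF3 byte is not valid UTF-8), so it is not flagged.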