# LII Risk Modeling System

This repository hosts the Linguistic Incendiary Index (LII) Risk Modeling System, a framework for assessing the social and ethical impact of language used on digital platforms.

LII is intended as a risk-visualization and research framework, not a censorship engine. Its purpose is to identify incendiary linguistic patterns, narrative escalation, and social harm risk while preserving transparency, accountability, and human review.

## Origin and Repository Timestamp

- Repository: frameworklori/LII-Framework
- Repository creation date: 2025-05-17
- First commit time: 2025-05-17 10:41:38 UTC
- Original creator / idea originator: LORI-FRAMEWORK (beautysungirl@gmail.com)

See ORIGIN_STATEMENT.md.

## Goals

- Identify incendiary linguistic patterns in online discourse.
- Provide non-censorship-based risk visualization.
- Enhance content moderation with structural awareness, not control.
- Support researchers and platforms in understanding narrative escalation risk.

## Components

- LII Score Calculator
- Narrative Dynamics Heatmap
- Integration API for platforms and researchers
- Ethical review and human oversight layer
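To make the component boundaries concrete, the following is a minimal, purely illustrative sketch of how an LII score calculator might hand off to the human oversight layer. All names, pattern weights, and the review threshold here are hypothetical examples for this sketch, not the repository's actual API or scoring model.

```python
from dataclasses import dataclass, field

@dataclass
class LIIResult:
    score: float                      # 0.0 (neutral) .. 1.0 (highly incendiary)
    flagged_terms: list = field(default_factory=list)  # patterns that contributed
    needs_review: bool = False        # True -> route to the human oversight layer

# Hypothetical incendiary-pattern weights (illustrative only).
PATTERN_WEIGHTS = {
    "destroy": 0.4,
    "traitor": 0.5,
    "they are all": 0.3,
}

REVIEW_THRESHOLD = 0.6  # hypothetical cut-off for routing to human review

def lii_score(text: str) -> LIIResult:
    """Toy LII score: sum of matched pattern weights, capped at 1.0.

    Note the output is a risk signal plus a review flag, never an
    automated block/allow decision -- consistent with the framework's
    non-censorship boundary.
    """
    lowered = text.lower()
    hits = [p for p in PATTERN_WEIGHTS if p in lowered]
    score = min(sum(PATTERN_WEIGHTS[p] for p in hits), 1.0)
    return LIIResult(score=score, flagged_terms=hits,
                     needs_review=score >= REVIEW_THRESHOLD)
```

In this sketch, high-scoring text is flagged for the oversight layer rather than acted on automatically, which mirrors the separation between risk visualization and enforcement described above.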

## Boundary

LII should not be used as an automated censorship system, political suppression mechanism, or punitive scoring tool against individuals or communities.

## License

MIT License. Contributions and ethical peer review welcome.

Part of the Lori Framework
