L1ght Looks to Protect Internet Users from Toxic and Predatory Behavior


Cybersecurity has been regarded as a necessity for all
computer users, especially today when data breaches and malware
attacks have become rampant.

However, one of the more overlooked aspects of cybersecurity is
the prevention of other forms of cybercrime, such as the spread of
harmful content and predatory behavior.

Most current discussions on cybersecurity revolve around
organizations protecting customer data or individual users keeping
their sensitive information from being intercepted. However, given
the prevalence of toxic behavior, it's about time the cybersecurity
community also gave internet safety, especially for children and
younger users, its due attention.

Israel-based startup L1ght[1] aims to curb the spread
of bad behavior online. It uses artificial intelligence (AI) and
machine learning (ML) to detect harmful content, hate speech,
bullying, and other predatory behavior in social networks,
communication applications, and online video games.

The firm raised $15 million in seed funding to further develop
its platform. The funding round was led by Mangrove Capital
Partners, Tribeca Venture Partners, and Western Technology
Investment.

Just recently, the governments of Australia, Canada, New Zealand,
the UK, and the US released eleven voluntary principles[2] that aim
to guide technology companies on how to mitigate and address child
exploitation and abuse on their platforms. The Technology
Coalition[3], which is composed of tech giants including Google,
Microsoft, and Facebook, has backed these principles. The group
commits to both promoting awareness and looking for new
technologies that can help address the problem.

L1ght is uniquely poised to become a major player in this
particular space.

The Spread of Online Toxicity

While the internet is now a critical part of everyone’s lives, it
has also become a major cause for concern for parents and guardians
around the world.

Many internet users spread hate speech and exhibit abusive
behavior to harm other users. More than half, or 59 percent[4], of
teens in the US have experienced cyberbullying or online
harassment. Victims of cyberbullying, which includes behaviors
such as name-calling, spreading false rumors, and physical
threats, can suffer negative psychological effects: they are more
likely[5] to experience changes in their sleep patterns and to
exhibit symptoms of depression.

Aside from receiving online threats and abuse, young internet users
can also find themselves in the hands of predators who target young
and vulnerable individuals. Using advanced technology and
techniques, these cybercriminals can expertly hide their abusive
behavior from parental control software and law enforcement.

Predators typically join legitimate platforms, chat rooms, and
forums to contact minors and establish false rapport. Unsuspecting
youth can be easily lured and exploited[6] by these malicious
adults. More than one in ten, or 11 percent[7], of teens have
experienced unwanted online solicitations.

Using AI to Fight Harmful Content

Fortunately, solutions such as L1ght look to beef up content
filtering and moderation measures in various platforms and to use
AI to flag abusive behavior. Its integration with different
platforms can also help moderators, owners, developers, and
publishers stamp out bad behavior by removing harmful content and
even taking action against accounts and users.

L1ght also employs a multidisciplinary team of data scientists,
cybersecurity experts, psychologists, and anthropologists to build
sophisticated algorithms that can identify nuances in communication
to ensure accurate and timely detection in various content formats
such as text, images, and video.

L1ght CEO & Co-Founder Zohar Levkovitz

Unlike conventional content filters that rely on blacklists of
forbidden words, L1ght's algorithm can identify potential variants
and spellings of abusive words and ban them as well. This makes it
much harder for users to get around filters and use hateful
expressions in online discussions.
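L1ght has not published its detection code, so as a rough illustration of the general idea, the toy sketch below normalizes common character substitutions and repeated letters before checking a blocklist, so simple variant spellings no longer slip past an exact-match filter. The substitution map, blocklist, and function names here are invented for this example.

```python
import re

# Hypothetical map of common character substitutions ("leetspeak")
# that users employ to evade exact-match word blacklists.
SUBSTITUTIONS = {
    "0": "o", "1": "i", "3": "e", "4": "a",
    "5": "s", "7": "t", "@": "a", "$": "s",
}

BLOCKLIST = {"stupid", "loser"}  # toy blocklist for illustration


def normalize(word: str) -> str:
    """Map a variant spelling back to a canonical form."""
    word = word.lower()
    word = "".join(SUBSTITUTIONS.get(ch, ch) for ch in word)
    # Collapse runs of repeated characters ("stuuupid" -> "stupid").
    return re.sub(r"(.)\1+", r"\1", word)


def contains_blocked(text: str) -> bool:
    """True if any token in the text normalizes to a blocklisted word."""
    return any(normalize(tok) in BLOCKLIST for tok in re.findall(r"\S+", text))


print(contains_blocked("you are a l0$er"))  # True: "l0$er" normalizes to "loser"
print(contains_blocked("you are great"))    # False
```

A real system would go much further (misspellings, word splitting, embedded punctuation), but the principle is the same: canonicalize first, then match.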

In addition, the technology is sophisticated enough to analyze
online exchanges to spot differences between different kinds of
conversations, such as friendly trash talk and verbal abuse.
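L1ght's actual models are proprietary, but the general approach of learning context from labeled examples can be sketched with a tiny bag-of-words Naive Bayes classifier in plain Python. All training sentences and labels below are invented for illustration; production systems would use far larger corpora and far richer models.

```python
import math
from collections import Counter

# Invented toy training data: friendly gaming banter vs. verbal abuse.
TRAIN = [
    ("gg nice shot you got lucky that round", "banter"),
    ("rematch? you totally carried that game", "banter"),
    ("lol good game see you next time", "banter"),
    ("you are worthless and everyone hates you", "abuse"),
    ("kill yourself nobody wants you here", "abuse"),
    ("you are pathetic and stupid get out", "abuse"),
]


def train(examples):
    """Count words per label and examples per label."""
    word_counts = {"banter": Counter(), "abuse": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts


def classify(text, word_counts, label_counts):
    """Pick the label maximizing log P(label) + sum log P(word|label),
    with Laplace smoothing for unseen words."""
    vocab = set().union(*word_counts.values())
    best_label, best_score = None, float("-inf")
    for label, counts in word_counts.items():
        total = sum(counts.values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.split():
            score += math.log((counts[word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label


wc, lc = train(TRAIN)
print(classify("gg you got lucky", wc, lc))          # banter
print(classify("you are worthless get out", wc, lc))  # abuse
```

Even this crude model separates the two phrases because it weighs words by the conversations they tend to appear in, rather than banning words outright.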

The recently secured funding will be used to further develop
L1ght’s algorithms to better detect toxicity in real-time.

A Safer Internet for Kids

Through its platform, L1ght aims to eliminate toxic online behavior
and create a safe online environment that will allow young users to
enjoy the benefits of the internet without harm.

“My co-founder Ron Porat and I started L1ght because we saw the
dangers our children were facing online and felt not enough has
been done to address these problems,” L1ght co-founder and CEO
Zohar Levkovitz says.

“Our main priority has always been to keep kids safe, and this
investment is further proof that our technology can accomplish
that. I’m honored and grateful to have our mission at L1ght
validated through this seed round from true industry titans.”

Preventing bad behavior and deterring other forms of cybercrime
should be part of modern cybersecurity discussions, and solutions
like L1ght are contributing to a safer internet, especially for
kids.

References

  1. L1ght (l1ght.com)
  2. Eleven voluntary principles (www.justice.gov)
  3. Technology Coalition (www.technologycoalition.org)
  4. 59 percent (www.pewresearch.org)
  5. More likely (www.buffalo.edu)
  6. Lured and exploited (www.thestar.com)
  7. 11 percent (www.jahonline.org)
