![Intel Bleep software](http://ixbtlabs.com/articles/inteld815efvac97/cp1.gif)
I guess this is for gamers who cared enough to get Bleep and censor some total Ronald's racist comments in CS:GO, but still want some of the offensive talk to stick around. Maybe you only want SOME misogyny in your lobbies?
#Intel Bleep software#
So now that you know what the software does, let's glance at those neat little customization sliders at the top of this post again. I just don't like the idea of people having control over what is OK to be said. Others are worried about censorship in general: "I don't understand how getting software to bleep out words is better than just shutting someone up completely." That's just one of the reactions I gathered from people here in Lubbock in response to Intel's new software, and I have to agree. I don't see what circumstances make this a better system than just muting people. If the idea is to tone down the toxicity you experience while playing, I don't see how this helps. In the YouTube video Intel posted detailing Bleep, they tell stories of gamers around the world who have to deal with the inflammatory language a lot of people use online. However, this seems like a secondary solution to an almost unsolvable problem.

Online gaming has exploded in recent years. China had to set a gaming curfew over health concerns. The pandemic has pushed a lot of people to turn to video games, and esports and streaming platforms like Twitch saw exponential growth during the period. With this mass exodus to the online world, it's essential to keep the space safe and civil. Gaming companies should take initiatives to encourage best practices to maintain the hygiene of the online gaming space; putting a credit scoring system based on online behaviour would be an excellent place to start. Bleep is AI software that is meant to give you the option to censor voice chat while you're playing multiplayer games: Polygon says that it censors hate speech in real time by bleeping out offensive language. Companies should also commit resources to developing machine learning algorithms to moderate content and curb hate speech on such platforms, which will eventually help to analyse and identify potential hate speech.
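To make the bleeping-versus-muting distinction concrete, here is a minimal sketch in Python. This is not Intel's implementation; the word list and messages are invented placeholders, and real products work on voice, not text.

```python
# Minimal sketch (not Intel's implementation): the difference between
# bleeping flagged words in a message and muting the speaker entirely.
# FLAGGED is an illustrative stand-in for a real toxicity lexicon.

FLAGGED = {"slur1", "slur2"}

def bleep(message: str) -> str:
    """Replace each flagged word with a bleep, keeping the rest audible."""
    return " ".join(
        "*bleep*" if word.lower() in FLAGGED else word
        for word in message.split()
    )

def mute(message: str) -> str:
    """Muting a player drops everything they say, flagged or not."""
    return ""

chat = "nice shot slur1 gg"
print(bleep(chat))  # -> "nice shot *bleep* gg"
print(mute(chat))   # -> ""
```

The trade-off the reactions above point at is visible here: `bleep` preserves the rest of the conversation, while `mute` removes the speaker entirely, benign messages included.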
ToxMod uses multiple algorithmic "gates" to sort out non-toxic conversations. The algorithm looks at the talk's content and various characteristics like emotion, frequency, and prosody, and depending on the classification it gives the user the option of blocking, muting, or issuing a warning. The first few triaging gates run on the user's device; the rest run on the platform's secured servers. If the need arises, the case goes to ToxMod's moderation team to respond manually while protecting the player's privacy.

Technology alone cannot entirely solve this menace. Gaming companies and professionals worldwide have formed a group, the Fair Play Alliance, to address the rise of online toxicity in the gaming world. A few tech giants are working towards the same goal. For example, Facebook has been working on a new model called XLM-R, a combination of the XLM and RoBERTa models, which uses self-supervised training methods to achieve cutting-edge efficiency in text comprehension across multiple languages.
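The multi-gate triage idea the article attributes to ToxMod can be sketched as a pipeline of increasingly expensive checks, where any gate can clear an utterance early. Everything below is invented for illustration: the real product's models, scores, and thresholds are not public.

```python
# Hedged sketch of a multi-gate triage pipeline, loosely modelled on the
# ToxMod description above. Gate logic and thresholds are assumptions.

from typing import Callable, List

# Each "gate" scores an utterance for toxicity (0.0 = clean, 1.0 = toxic).
# Cheap gates come first (meant to run on-device); heavier ones later
# (meant to run server-side).
def keyword_gate(text: str) -> float:
    return 1.0 if any(w in text.lower() for w in ("idiot", "trash")) else 0.0

def emotion_gate(text: str) -> float:
    # Stand-in for a prosody/emotion model: all-caps shouting as a weak signal.
    return 0.7 if text.isupper() else 0.1

GATES: List[Callable[[str], float]] = [keyword_gate, emotion_gate]

def triage(text: str, threshold: float = 0.5) -> str:
    """Run gates in order; stop early when a gate classifies the text as clean."""
    for gate in GATES:
        if gate(text) < threshold:
            return "pass"   # classified non-toxic, no further gates run
    # Every gate flagged it: surface an action for the user or moderators.
    return "warn"           # could equally map to "mute" or "block"

print(triage("good game everyone"))   # -> "pass"
print(triage("IDIOT TRASH TEAM"))     # -> "warn"
```

The early-exit design matches the point about on-device gates: most conversations are clean and never need to reach the heavier server-side models.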
![Intel Bleep software](https://i.imgur.com/MjhXSpS.jpg)
Online toxicity, Intel's Roger Chandler has said, is forcing gamers to quit or is affecting their mental health. To that end, Intel used feedback from online gamers to build Bleep. Natural Language Understanding (NLU) is at the heart of this AI technology, allowing the creation of more sophisticated behavioural classifiers with multiple layers of contextual sensitivity. Spirit AI combined NLU with powerful search tools that scan millions of messages in milliseconds to optimise the tool's efficiency. Bleep's interface offers multiple sliders that let users screen hate speech under categories such as "Ableism and Body-Shaming," "Racism and Xenophobia," "LGBTQ+ Hate," "Sexually Explicit Language," "Swearing," and "Misogyny." Modulate, a voice technology startup, has developed a similar tool named ToxMod to censor problematic speech in real time. The platform uses machine learning models and triaging technology to keep toxic words out of the chat.
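A rough way to picture the per-category sliders is as a settings table mapping each category to a filter level. The category names come from the article; the level names and the rule for what each level filters are my assumptions, not Intel's published behaviour.

```python
# Illustrative sketch of per-category filter settings like Bleep's sliders.
# Level semantics and thresholds below are invented for the example.

LEVELS = ("None", "Some", "Most", "All")  # assumed slider positions

settings = {
    "Ableism and Body-Shaming": "All",
    "Racism and Xenophobia": "All",
    "LGBTQ+ Hate": "All",
    "Sexually Explicit Language": "Most",
    "Swearing": "Some",
    "Misogyny": "All",
}

def should_bleep(category: str, severity: float) -> bool:
    """Assumed rule: a higher slider level bleeps lower-severity utterances.

    severity is a classifier confidence in [0, 1]; thresholds are invented.
    """
    thresholds = {"None": 2.0, "Some": 0.9, "Most": 0.5, "All": 0.0}
    return severity >= thresholds[settings.get(category, "None")]

print(should_bleep("Swearing", 0.6))               # -> False ("Some" needs >= 0.9)
print(should_bleep("Racism and Xenophobia", 0.1))  # -> True  ("All" bleeps everything)
```

This framing also shows why the sliders drew the reactions quoted earlier: anything short of "All" deliberately lets some flagged speech through.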
#Intel Bleep software PC#
According to an ADL report, 81 percent of US adults aged 18 to 45 encountered harassment while playing online games in 2020, up from 74 percent the year before. More than two-thirds of online multiplayer players have been subjected to severe forms of abuse, such as physical threats, stalking, and persistent harassment. Recently, Intel announced Bleep, an AI tool to censor abusive or derogatory words in chat while playing games. The chip giant developed the tool in partnership with Spirit AI, a data science and AI engineering company. Roger Chandler, VP & GM of Client XPUs Product and Solutions at Intel, said that as one of the leaders of PC gaming, it is Intel's responsibility to keep its online gaming space safe.