Microsoft quietly released a small feature, and suddenly there was outrage

Should technology hide who you really are?

Not too long ago, I watched two people I really like have a heated argument.

The subject? Wokeness.

This now-ubiquitous term arouses so many emotions that there's no longer any real definition of what it means.

Even if there never was.

That's why I was intrigued by the sudden debate over a little-known Microsoft feature that's been around for almost two years.

It's a feature of the latest versions of Word in Microsoft 365. In the Editor tool, it underlines words and phrases deemed to be, well, "problematic."

So many things in people’s lives are problematic. It would be wonderful if technology could solve more. But this feature seems a bit strange.

Enter a manly, macho vision.

Considering it's been around since March 2020, you'd think someone on one side of the cultural divide or the other would already have offered a loud "Hurrah!" or snorted "Oh, my God, not that" before now.


But it took the Daily Mail, of all publications, to raise a little stink in Microsoft's direction. The paper calmly explained how Redmond's "woke filter flags words like mankind, blacklist, whitewash, mistress, and even maid."

The Mail even poked fun at the fact that former British Prime Minister Margaret Thatcher can no longer be called "Mrs. Thatcher." No, she's now "Ms. Thatcher." I'm not sure she would have liked that.

And then there's "dancer," which isn't inclusive, while "performance artist" is.

Naturally – or, some might say, fortunately – this problem-solver is opt-in. It doesn't autocorrect. It just whispers softly that you might be sounding like someone not everyone will like. Or that only some people will like.

The feature also offers alternative suggestions on topics such as age bias, cultural bias, ethnic slurs, gender bias, and racial bias.

Microsoft says it does its due diligence when deciding which words and phrases need a prompt and which don’t. Which sounds like extremely difficult – and sometimes painfully subjective – work.

A few glances at the Mail's comments section revealed several shades of outrage.

Example one: “If you can’t control the thoughts, control the language. George Orwell, 1984.”

Example two: "but don't worry, you can turn it off—————-for now."

And please don’t get me started on the social media reactions.

All this left me a little conflicted. It's bad enough having to deal with autocorrect, which still doesn't always (often? ever?) understand what you're trying to say. Now you can have another potentially self-righteous thing looking over your shoulder?

And since when did Microsoft's grammar police offer helpful suggestions? I tried it once. Never again. (That two-word sentence is probably bad grammar.)

But wait. Who uses this?

In the end, I found myself asking just one thing: Who really wants this?

Is it people who live in fear of offending?

Is it companies that live in fear of offending?

Is it those who genuinely have no idea which words and phrases are acceptable in a business (or any other) sphere and which aren't?

Is it people desperate to be politically correct – in a business sense, not just a political one – because they think their careers may depend on it?

Is it people at Microsoft?

If so, this Microsoft feature ought to encompass many more words and phrases than just those that suggest an obvious lack of inclusivity.

The possibilities are profound, as well as problematic. I’m not sure even Microsoft knows what to make of all this.

It's not as if Microsoft makes the feature a default – or even easy to find. It's buried under "Grammar and Refinements."

Software that's woke. Is that a good thing?

But in the end, isn’t all this a bit sad?

Not because you don't want everyone included – it's definitely time for that to happen – but because you don't want the technology to take the blame.

More interesting still are the particular freedoms this tool offers. For example, you can enable the gender bias checker while disabling the ethnic slurs checker.

One can imagine the user’s mental contortions: “How would I like to sound today? Like a sexist fanatic or a woke fanatic?”

What if you're in a corporate setting and your IT admin can see your settings?

They might mutter, "Aha, Henry doesn't mind sounding racist, but he's really strong on gender inclusion."

Some might wonder if this feature allows people to be even more fake at work than they already are.

Many may find it more uplifting to know the people they work with and to have a clear idea of who they really are.

If instead they use technology to mask aspects of their true selves, is that a little discouraging? Or is it just a part of modern life?

Technology has made it so much easier to fake so much. We are constantly on our toes, wondering what is real and what is not.

Would you rather know, when you see something someone has written, that it’s a reflection of their true voice and self?

Why rely on technology to Botox your lines?

Your writing is you, isn’t it?

That’s why you always start your emails with “Hello! Hope you’re doing well.”
