Summary

Microsoft just dropped a bunch of updates to their AI assistant Copilot, and honestly, most of it is the same old tech wrapped in shiny new marketing. But there are a few things worth caring about: it can now see what’s on your screen, think a bit deeper (allegedly), and talk to you with a synthesized voice. Oh, and it’s coming to WhatsApp because apparently, we needed another way to avoid real human interaction.

The Highlights (aka Why You Might Actually Care)

Screen-monitoring AI: Copilot can now see what you’re looking at in Microsoft Edge. Intrusive? Maybe. Useful? Possibly.

“Deeper Thinking”: Microsoft claims their AI can now solve complex problems better. (Spoiler: It probably can’t solve your existential crisis.)

Voice Features: You can now talk to your AI and it’ll talk back. Because typing was just too much work.

Personalization: It’ll remember your preferences, except in the EU where privacy laws actually exist.

The Full Story (Without the Corporate Jargon)
Let’s be real here: Microsoft is in an AI arms race with Google, and they’re throwing everything at the wall to see what sticks. Their latest move? Making their Copilot AI assistant more “warm” and “distinct” – whatever that means.
Vision: Your AI Observer
The headliner is Copilot Vision, which is basically Microsoft’s way of saying “our AI can now see what you’re looking at online.” They’re marketing it as some revolutionary feature, but Google’s been doing this for ages on Android. The difference? Microsoft swears they’re not storing your data or using it to train their models. (For now.)
Here’s what makes it interesting, though: they’re actually limiting what websites it can work on. No inappropriate content (obviously), no paywalled content (The New York Times probably breathed a sigh of relief), and nothing they deem “sensitive.” What counts as sensitive? Microsoft won’t say, because of course they won’t.
Why should you care? Well, if you’re the type of person who’d rather not read a recipe or compare furniture options manually, this might actually be useful. Just don’t expect it to work on all your favorite websites – many publishers are telling AI to stay away from their content.
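For the curious: publishers typically wave off AI crawlers through their site’s robots.txt file. A minimal sketch of what that looks like (bot names vary by company; GPTBot is OpenAI’s crawler, and Google-Extended is Google’s opt-out token for AI training):

```
# Block OpenAI's crawler from the entire site
User-agent: GPTBot
Disallow: /

# Opt out of Google's AI training without affecting Search indexing
User-agent: Google-Extended
Disallow: /
```

Whether a given bot actually honors these directives is another question entirely, which is part of why publishers are so grumpy about the whole thing.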
Microsoft claims they’ve given Copilot the ability to reason through complex problems better. They’re using some fancy “reasoning models” that take longer to respond but supposedly give better answers. Is it actually smarter? Who knows. But it’ll make you feel better about asking an AI to solve your problems instead of figuring things out yourself.
Now you can talk to Copilot and it’ll talk back with one of four synthetic voices. It can even pick up on your tone, which means it might be able to tell when you’re being sarcastic (but probably not). Just don’t talk too much – there’s a time limit on voice features, even if you’re paying for the premium version.
Copilot will start remembering your preferences and past interactions to become more tailored to you. It’s like having a digital assistant that knows all your quirks and habits. Unless you’re in the EU, that is – privacy regulations there mean the personalization features are staying out for now. Comforting or concerning? You decide.
Here’s what it all means: Microsoft is pushing hard to make AI more integrated into our daily lives. Some of it might actually be useful, like having an AI that can understand what you’re looking at online and help you make sense of it. Other features feel more like solutions in search of problems.
The real question isn’t whether these features work – it’s whether we actually need them. Do we really need an AI to talk to us, to think “deeper” for us, to remember our preferences? Maybe. Or maybe we’re just finding more ways to outsource our thinking to algorithms.
Look, AI isn’t going away. It’s going to keep getting more sophisticated, more integrated, and probably more invasive. The best we can do is understand what these tools can and can’t do, use them when they actually add value, and remember that sometimes, thinking for yourself is still the best option.
And if all else fails, remember: the AI might be able to see your screen, but it still can’t see through your pretenses. That’s still a uniquely human skill.