The Download: China's new social credit law, and robot dog navigation

This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's happening in the world of technology.

Here's why China's new social credit law matters

It is easier to talk about what the Chinese social credit system is not than about what it is. Ever since 2014, when China announced its plans to build it, the system has been one of the most misunderstood things about China in Western discourse. Now, with new documents released in mid-November, it is finally possible to set the record straight.

Most people outside China assume it will work as a Black Mirror-style system powered by technology that automatically scores every Chinese citizen based on what they do right and wrong. Instead, it is a mix of attempts to regulate the financial credit industry, to let government agencies share data with one another, and to promote state-sanctioned moral values, however vague that may sound.

Although the system itself will take a long time to implement, by publishing a draft law last week China is now closer than ever to defining what it will look like, and how it could affect the lives of millions of citizens. Read the full story.

—Zeyi Yang

Watch this robot dog climb tricky terrain just by using its camera

The news: When Ananye Agarwal took his dog out for a walk up and down the steps of a local park near Carnegie Mellon University, other dogs stopped in their tracks. That's because Agarwal's dog was a robot, and a special one at that. Unlike other robots, which tend to rely heavily on an internal map to get around, his robot navigates tricky terrain using a built-in camera together with computer vision and reinforcement learning.

Why it matters: While other attempts to use camera signals to guide robots' movement have been limited to flat terrain, Agarwal and his fellow researchers managed to get their robot to walk up stairs, clamber over rocks, and hop across gaps. They hope their work will help make robots easier to deploy in the real world, and vastly improve their mobility in the process. Read the full story.

—Melissa Heikkilä

Trust large language models at your own risk

When Meta launched Galactica, an open-source large language model, the company was hoping for a big PR win. Instead, all it got was flak on Twitter and a scathing blog post from one of its most vocal critics, culminating in its embarrassing decision to take the model's public demo down just three days later.

Galactica was intended to help scientists by, among other things, summarizing academic papers and solving math problems. But outsiders quickly prompted the model to provide "scientific research" on the benefits of homophobia, antisemitism, suicide, eating glass, and being white or male, demonstrating not only how premature the launch was, but also how insufficient AI researchers' efforts to make large language models safer have been. Read the full story.

This story is from The Algorithm, our weekly newsletter that gives you the inside track on all things AI. Sign up to receive it in your inbox every Monday.

The must-reads

I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

1 Verified anti-vaccine Twitter accounts are spreading health misinformation
And neatly demonstrating the problem with charging for verification in the process. (The Guardian)
+ Maybe Twitter didn't boost your career as much as you might think. (Bloomberg $)
+ A deepfake of FTX's founder is circulating on Twitter. (Motherboard)
+ Some of Twitter's liberal users are refusing to leave. (The Atlantic $)
+ The Twitter layoff bloodbath is reportedly over. (The Verge)
+ The potential collapse of Twitter could erase vast records of recent human history. (MIT Technology Review)

2 NASA's Orion spacecraft has completed a flyby of the moon
Paving the way for humans to return there. (Vox)

3 Amazon's warehouse surveillance algorithms are trained by humans
Low-paid workers in India and Costa Rica review thousands of hours of mind-numbing footage. (The Verge)
+ The AI data labeling industry is deeply exploitative. (MIT Technology Review)

4 How to come to terms with climate change
Accepting hard facts is the first step toward avoiding the darkest ending for the planet. (New Yorker $)
+ The world's richest countries have agreed to pay for global warming. (The Atlantic $)
+ These three charts show who is most to blame for climate change. (MIT Technology Review)

5 Apple has revealed a dubious cybersecurity startup's dealings
It compiled a document illustrating the extent of Corellium's relationships, including with the notorious NSO Group. (Wired $)
+ The hacking industry faces the end of an era. (MIT Technology Review)

6 The crypto industry is still feeling jittery
Shares in the largest crypto exchange have fallen to a record low. (Bloomberg $)
+ The UK wants to crack down on gamified trading apps. (FT $)

7 The criminal justice system is failing neurodivergent people
Mimicking an online troll led to an autistic man being jailed for five and a half years. (Economist $)

8 Your workplace could be planning to scan your brain 🧠
All in the name of making you a more efficient worker. (IEEE Spectrum)

9 Facebook doesn't care if your account gets hacked
A host of new account recovery solutions appear to have had little effect. (Vice $)
+ Meta is being sued in the UK over its data collection. (Bloomberg $)
+ Independent artists are building the metaverse in their own way. (Motherboard)

10 Why training image-making AIs on AI-generated images is a bad idea
"Contaminated" images will just confuse them. (New Scientist $)
+ The facial recognition software used by the US government reportedly doesn't work. (Motherboard)
+ The dark secret behind these cute AI animal images. (MIT Technology Review)

Quote of the day

“It seems like they cared more before.”

—Ken Higgins, an Amazon Prime member who is losing faith in the company after a string of frustrating delivery experiences, tells the Wall Street Journal.

The big story

What if you could diagnose diseases with a tampon?

February 2019

On an unremarkable street in Oakland, California, Ridhi Tariyal and Stephen Gire are trying to change how women take care of their health.

Their plan is to use blood from used tampons as a diagnostic tool. In that menstrual blood, they hope to find early markers of endometriosis, and eventually a host of other diseases. The simplicity and ease of the method, if it works, would be a big step up from current standards of care. Read the full story.

—Dayna Evans

We can still have nice things

A place for comfort, fun and distraction in these strange times. (Got any ideas? Drop me a line.)

+ Happy Thanksgiving, in your nightmares!
+ Why Keith Haring's legacy is more visible than ever, 32 years after his death.
+ Even the genteel world of dinosaur skeleton assembly isn't immune to scandal.
+ Pumpkins are a Thanksgiving staple, but that wasn't always the case.
+ If I lived in a frozen wasteland, I'd be the world's grumpiest cat too.
