
Hitachi’s professor of happiness isn’t worried about an AI dystopia

For 13 years, Hitachi Fellow Dr Kazuo Yano has been using data analytics principles to measure his own happiness. Since 2006, Dr Yano has worn a ‘happiness monitor’ on his wrist, designed to unearth insights into how unconscious behaviour and motion can quantify his levels of happiness.

The Hitachi veteran, who has more than 350 patents to his name and pioneered the fields of social big data analysis as well as practical semiconductor research, has long been extolling the societal benefits of data and artificial intelligence.

Techworld met with Dr Yano this week at the MGM Grand’s Marquee ballroom during Hitachi Vantara’s Next conference in Las Vegas, and we asked him, pointedly: has data made him happier? The answer arrives unequivocally and in a heartbeat: “Yes … a lot.”

“I have been measuring my life, especially wrist motion, for the last 13 years of my life,” he said. “Almost all day, every day, I have analysed my behaviour and most of it is quite unconscious. I analyse my life every day, to make my life better, and productive, and happier, and especially life as something to make other people happier, that’s the most important thing.”

Anyone can apply these principles, he said, and Hitachi has even developed an app to provide little personal prompts with the intention of increasing happiness and decreasing unhappiness. It appears at first glance like a strange fusion of corporate surveillance and mindfulness programmes, but Yano insists that the academic research behind the idea is beneficial for both people and profits.

Real-world measurement of personal data with wearable computers dates back to the early decades of computing in the 1970s, but it really took off in the late 2000s and early 2010s with the ‘quantified self’ movement, and we can see forms of it in mass-produced consumer electronics such as the Fitbit, or in the health functions of any given smartwatch.

Psychologically driven prompts from the Hitachi app, which has been trialled within hundreds of companies to date, encourage employees to put the happiness of others first and foremost: “Just in the morning the app reminds you: what kind of challenge will you take today to make other people happy? Very tiny things,” explained Yano. “But directing your attention to this question will completely change your life, especially when you make it a daily habit, a lot of the data shows that.

“Hundreds of companies have collaborated with us to test our app, and whether that app will make employees happier and more productive … the results are quite successful.”

The point, he adds, is that making other people happy is “academically proven” to be a “good thing for sustainable happiness: for the people that interact with you and also yourself.”

Yano points out that the last 20 years have seen academic research on happiness and the wellbeing of people – known as positive psychology – make significant progress. “Many academic research and results have been collected, and there is a strong correlation with sustainable happiness and health, physical health, mental health, and also profit for the company. I think a good purpose, and a good data combination, will make this planet happier,” he said.

Whether the public will respond well to such proposals is up for debate. Even casual news observers will be aware of the potential for data abuse and misuse that has made headlines. Businesses that install monitoring technologies, ostensibly for the benefit of employees, will be justifiably prone to provoking accusations of personal intrusion, no matter how management tries to dress it up in the language of productivity or even Feng Shui.

If human psychology is indeed malleable to these “tiny interventions”, what’s to say the same wouldn’t be equally true for a malicious actor seeking to create negative feelings in their audience, as Facebook toyed with when it secretly ran psychological experiments on its users to influence their emotions?

Yano agrees that this could be achieved in theory, but adds that there is “always some risk, even if you’re walking on the street and you could be killed by some bad guy … we can’t completely eliminate all the risks.”

Ethics

Just how does one square recent software-led inventions like killer drones and the mass manipulation of online audiences with the unwavering positivity extolled by Dr Yano? Or are the concerns of institutes like OpenAI and others overblown in his opinion? He insists that “positivity and worry is the same thing” and, although care should be taken with the implementation of new technologies, Yano also stresses that handling data is “not new”.

“Science has been relying on data for the last 300-400 years, even more,” he said. “But in the last 100 years especially, we have heavily relied on discovering what is real: discovering from data is beyond our naive intuition.

“Science always made progress by the observation of new data: that’s the only way for us, humankind, to deny our selfish expectation or anticipation. Data is the only way to make us more rational.”

That the core of something is not new is not a reason to dismiss concerns either, and Yano suggests looking at the outcomes of these algorithms – for instance, the autoplay function on YouTube that has led particularly impressionable viewers down some very dark rabbit holes. It is here again that he raises happiness as a guiding principle for using data responsibly.

“Good purpose is always related to the happiness of people, something which makes people unhappy is not good,” he said. “So the time spent on YouTube sometimes makes people unhappy.

“Sometimes one action, to make one person happier, will make another person unhappy, so we need to be careful. But anyway, that’s the complexity of the world.”

However, he is encouraged by the debate swirling around the ethics of big data usage and AI today, especially compared to a few years ago, when the conversation was often fixated on eschatological soothsaying rather than the immediate realities of the here and now.

“My understanding is that the use of data allows us to be flexible, going beyond the conventional governance of management structures or principles,” he said.

“Governing social systems by the rule or by manual is too rigid for a flexible world, so we need data and we need machine learning to be flexible. Depending on the situation, we provide flexible actions. That, AI can do. But, conventional people who are relying on static rules – they will try to keep that, so some of the ethical discussions coming from those people are too conservative sometimes. That kind of balance needs to be treated carefully.”
