the agent’s own intentions are represented in the reward structure of the model

I typed “autism” into Google News and found this amazing video:

First of all, Lisa Feldman Barrett has thoroughly debunked the myth of universal emotional facial expressions.

Notice how the robot guys aren’t even beginning to address anything like the office politics problems Barrett is talking about? Yeah, hearing differently and speaking at the wrong tone of voice can bring people down on you. Maybe I’m not supposed to speak for “lower functioning” people or whatever, but the robot guys are patronizing us. It’s like they got a cartoon understanding of autism from a book and then sat down and started coding.

Notice how Barrett’s solution to the problem is concerned, empathetic parenting that explains what’s going on to the children? They want to replace all that with an elaborate instrumental conditioning device, kinda like Harry Harlow.

The project is funded by the UK’s Engineering and Physical Sciences Research Council for 42 months. In other words, it’s about enough for somebody’s PhD project. It’s the “SoCoRo” project, for “socially competent robot.” They have a website. Tellingly, the “Autism Reference” section of the site is just a message that “There are no articles in this category.” You can find a link to their academic publication about it, though. It was a presentation in Vienna (travel!), at a conference on “The Role of Intentions in Human-Robot Interaction.” I didn’t try to figure out how much money the grant was for, but presumably “a lot” from the perspective of a jobless autistic person. This is how they introduce the need for this work:

According to 2011 UK census figures, Autism Spectrum Disorder (ASD) affects 547,000 people over the age of 18 (1.3% of working age adults) [1]. These adults encounter serious difficulties in their everyday life, particularly in securing and maintaining employment. The unemployment rate among adults with ASD is higher than 85%, which is nearly double the unemployment rate of 48% for the wider disabled population and compares to the UK unemployment rate of 5.5%. One reason for this is that people with ASD struggle to correctly interpret social signals, i.e., the expressive behavioural cues through which people manifest what they feel or think (facial expressions, vocalisations, gestures, etc.). This leads to difficulties in correctly interpreting interactions with coworkers and supervisors.

This completely mischaracterizes what’s going on, which is often more like harassment and discrimination, the kind of thing autistic people would of course stop if they had the power to. Now that #MeToo has made it officially obvious that HR doesn’t help with those sorts of issues, do you think HR is making sure all the relevant protections are being enforced? There’s also scientific data showing that normal people decide they don’t like us near-instantaneously, because we move weird in subtle ways. It’s not the content of what we’re saying. There’s a difference between “I’m aware of the structural and cultural problems hurting my life” and “I’m too socially retarded to understand if I’m doing a good job or not.” Treating people decently and fairly is actually easier than building robots. We know how.

Behavioural Skills Training (BST) [2] is recognized as one of the most effective training approaches for the effects of an ASD. BST is a behaviourist training approach involving phases of instructions, modelling, rehearsal, and feedback in order to teach a new skill [3]. It has been used to teach social skills to people both with and without disabilities [4]. However, BST is too labour-intensive to be widely applied. If robots could be used to help deliver BST, this could reduce the effort required by human trainers and lower the cost of BST application.

The logic is valid but the argument isn’t sound because the premises are false. Robots are what we need only if you agree that the problem is a labor shortage in the delivery of “behavioural skills training.” There’s a missing link between the types of social skills taught by those programs and gains in the employability of autistic people. It’s possible that there’s research connecting those things, but it’s not presented or described here. Is the problem social impairment or social discrimination? Voting for robots is taking a position on that question.

While focusing on the particular case of BST for subjects with ASD, this research contributes to the long-term vision of social robots able to seamlessly integrate into our everyday life, opening the way to a multitude of domestic, educational and assistive applications. We argue that the development of successful long-lived human-robot relationships requires transparency of the robot’s motives, goals and plans so that its intentional stance is clear to human interaction partners.

This is where they express a longing for slavery. Slaves were considered to be low-tech robots. Still are.

Barney Gattie served on the jury in Georgia that sentenced Keith Tharpe to death for the murder of Jaquelin Freeman. Gattie is white; Tharpe is black, and so was Freeman. Seven years after the trial, Gattie stated in a sworn affidavit that he believed “there are two types of black people”—“black folks” and “niggers.” He declared that Tharpe was a “nigger” while the Freemans were “nice black folks.” Gattie added that “after studying the Bible, I have wondered if black people even have souls.” In light of this affidavit, Tharpe argued that his constitutional right to an impartial jury had been violated. A state court disagreed, as did a federal district court. In September, a federal appeals court also ruled against Tharpe, clearing the way for his execution—which the Supreme Court blocked in September, over the dissents of Justices Clarence Thomas, Samuel Alito, and Neil Gorsuch.

Never, ever let these people know what you’re really thinking:

The dream is to have a slave with no inner life. They’re keeping the dream alive, automating away the unpleasant task of helping autistic people with feelings.

This is part of the very same long-term vision. Sex slaves and people with helping jobs belong in the same category: black people, robots, whatever.

Soon, they’ll be telling us that these robots are the answer to the social retardation preventing us from being taken seriously as romantic partners! There will be much congratulating among the nonautistic people, who are probably married for all I know.

This is what they’re really implementing:

We will be looking at employer-employee office-based scenarios, aiming to train high-functioning ASD individuals to decode communication signals from their employer. We will focus on broader groups of emotions such as approval (positive) and disapproval (negative) expressions [5]. We will gradually increase the dynamic component of expressions whereby a continuous internal state is reflected by the robot as opposed to the more commonly used discrete expressions [6]. The goal is that the resulting social signals are more “human-like”, posing the advantage of increased ecological validity [7] and hence enabling transfer of learning from robot to human incrementally in line with the Reduced Generalisation Theory [8].

While major components of this project concern the recognition of social signals [9] and the low-level production of expressive behavior by the robot, this paper concerns the decision-making and high-level behavioral policies that will determine the robot’s expressive responses to the human interaction partner during BST. Our approach to producing policies for the robot is based on the prior work of Broz et al., which used partially observable Markov decision processes (POMDPs) to model socially acceptable behavior for human-robot interaction [10]. The modelling approach taken in this work links a human partner’s observable behavior to the unobservable intentions motivating that behavior, allowing the robot to act based on beliefs about the partner’s current intention. The agent’s own intentions are represented in the reward structure of the model.

They’re trying to implement smoothly shifting moods with math.
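
If you’re wondering what that actually cashes out to, it’s roughly the belief-update-plus-reward-table machinery sketched below. To be clear, this is my own minimal sketch of a generic POMDP agent, not their code; every state, observation, action name, and number in it is invented for illustration, since the paper gives none.

```python
# Minimal POMDP-style agent: a hidden "intention" the robot can't see,
# expressions it can see, and a reward table standing in for its own goals.
# Everything here is illustrative, not taken from the SoCoRo paper.

INTENTIONS = ["approving", "disapproving"]       # hidden states of the human partner
OBSERVATIONS = ["smile", "frown", "neutral"]     # what the robot can actually observe

# P(observation | intention): how likely each expression is under each hidden intention.
OBS_MODEL = {
    "approving":    {"smile": 0.6, "frown": 0.1, "neutral": 0.3},
    "disapproving": {"smile": 0.1, "frown": 0.6, "neutral": 0.3},
}

# "The agent's own intentions are represented in the reward structure of the model":
# the robot is paid for matching its response to what it believes the human intends.
REWARD = {
    ("acknowledge_approval", "approving"):     1.0,
    ("acknowledge_approval", "disapproving"): -1.0,
    ("repair_behaviour",     "approving"):    -0.5,
    ("repair_behaviour",     "disapproving"):  1.0,
}

def update_belief(belief, observation):
    """Bayes rule: new belief is proportional to P(obs | intention) * old belief."""
    unnormalised = {s: OBS_MODEL[s][observation] * belief[s] for s in INTENTIONS}
    total = sum(unnormalised.values())
    return {s: p / total for s, p in unnormalised.items()}

def best_action(belief):
    """Pick the action with the highest expected reward under the current belief."""
    actions = {action for (action, _) in REWARD}
    return max(actions, key=lambda a: sum(belief[s] * REWARD[(a, s)] for s in INTENTIONS))

if __name__ == "__main__":
    belief = {"approving": 0.5, "disapproving": 0.5}   # start out knowing nothing
    for obs in ["frown", "frown", "smile"]:
        belief = update_belief(belief, obs)
        print(obs, {s: round(p, 2) for s, p in belief.items()}, "->", best_action(belief))
```

That’s the whole “continuous internal state”: a probability distribution that drifts around as expressions come in, plus a lookup table that stands in for the robot’s “own intentions.”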

They use words to make it sound like telling people the answer after they get a question wrong is a new idea.

One possible extension of this work for this new application area is to expose the agent’s reasoning process to the human partner after an episode of interaction. This allows an autistic person to compare their interpretation of the intentions motivating the interaction to the agent’s and to correct and learn from misunderstandings during roleplaying. We hope that by modulating the expressive behavior of a robot with a simplified humanoid appearance, we will be able to create ecologically valid training scenarios that allow autistic individuals to repeatedly roleplay common workplace interactions and practice recognizing and interpreting expressive behavior in these contexts. The robot will facilitate this learning by sharing its beliefs about the state progression after an episode of interaction, revealing to the user why certain expressive behaviors were selected. This review may be aided by video playback of the interaction itself in order to allow the user to review the expressive behaviors displayed and focus on important details of the interaction.

They even propose to use real therapy sessions to program the robot’s practices regarding therapist self-disclosure.

We believe that this human-human roleplay-based approach will also be effective for modelling BST behavioral rehearsal. At a high level, observing human-human roleplay will aid in the understanding of how therapists use expressive behavior during training. Observation of entire episodes of therapy will also allow us to understand how therapists give feedback about the link between behavior and the underlying intentions motivating them. Observation of this therapeutic interaction will motivate the design of how the robot should present information about its own internal state and decision-making process to the user after episodes of role-play.

Incremental progress in addressing autism: increasing the computational efficiency of the modeling algorithms!

The types of interactions that we need to model for this application are more complex than that modelled in Broz et al.’s prior work, which was evaluated using a simple driving interaction. The range of possible expressive behavior is greater and the possible intentions (and the influence of one partner’s intentions on the other’s behavior) are likely to be more complex. There are a number of ways in which we intend to address these challenges, both in terms of collecting human data and in terms of dealing with computational complexity.

Then there’s a final gem toward the end:

However, because autism is a spectrum disorder, there is the possibility of using undiagnosed and/or neurotypical members of the public (who are much easier to recruit) as representative of our target groups by classifying them according to where they fall on this spectrum. The tool that we propose to use to do this classification is the Autism-spectrum Quotient (AQ) [12]. The AQ has been shown to be an effective screening tool for ASD [13] which gives a score of 0-50 indicating the prevalence of autistic-type traits in an individual. Research has been conducted showing a degradation in performance in non-ASD individuals corresponding to AQ score for tasks in which ASD is associated with degraded performance (concept formation) [14]. We intend to assign roleplay roles to non-ASD experiment participants according to whether they are low-AQ (employer role) or high-AQ (employee role).

Ever consider that autistic people can also be supervisors? I’ve been one. One of the people I used to supervise told me I was good at it because I “let them get on with it.”

What about the autistic person without a job? It’s nice of society to keep them in mind in this fashion.

As far as society is concerned, there’s nothing pathological about the whole mindset behind this project. It’s, like, futuristic.

If we’re officially going to spend money on autism, can we spend it on more concretely beneficial things that respect autistic people? Is this helping autistic people, or is it helping a particular graduate engineering program? If we’re spending lots of money on a robot fetish, which we are, let’s call it that. The statement toward the beginning of their paper is profound: “The agent’s own intentions are represented in the reward structure of the model.”

I enjoyed robot stuff as a kid, but that’s not where I learned about people approving or disapproving of my behavior. Nor should it have been.
