Assisting Synchronous Chat in a MOOC Through Agent Facilitation

 

Overview

Assisting Synchronous Chat in a MOOC through Agent Facilitation is a learning activity and feature set designed to support a chat activity in a MOOC environment. Collaborative learning can be beneficial for learning abstract concepts, since learners can openly externalize their own understanding of the concepts and internalize the understanding of other learners to construct new knowledge on a topic.

[Figure: A schematic overview of the concepts covered in the lesson.]

The goal of the lesson was to teach meteorological concepts to the users of the MOOC. To facilitate the learning of these concepts, we chose to support synchronous interaction among the users, which is not currently common in MOOC environments. The solution was to implement a conversational agent that guided a chat activity. However, supporting a chat activity in a MOOC environment involved overcoming a number of obstacles, including:

  • Time zone and scheduling conflicts
  • Low student motivation to participate in the chat activity, especially when joining mid-conversation and feeling lost
  • Frequent interruptions from the conversational agent

The design solution for overcoming these obstacles was a single, continuous chat activity that students could enter and exit at any time, with the conversational agent summarizing what had already been discussed whenever a new user entered the chat. A sketch of this entry-triggered behavior follows.
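The sketch below is only an illustration of this design, not the project's implementation; every name in it (FacilitatorAgent, ChatRoom, on_user_join, and so on) is hypothetical, and the details of the real agent's framework are not shown.

class FacilitatorAgent:
    """Stand-in for the conversational agent's consolidation behavior."""

    def welcome(self, user):
        print(f"Agent: Welcome to the discussion, {user}!")

    def request_summary(self, for_user, messages_so_far):
        # In the actual activity the agent asked the other participants
        # to provide the recap; here the prompt is simply printed.
        print(f"Agent: Could someone summarize the {len(messages_so_far)} "
              f"messages posted so far for {for_user}?")


class ChatRoom:
    """A single, continuous room that students may enter or leave at any time."""

    def __init__(self, agent):
        self.agent = agent
        self.users = set()
        self.transcript = []  # (user, message) pairs posted so far

    def post(self, user, message):
        self.transcript.append((user, message))

    def on_user_join(self, new_user):
        self.users.add(new_user)
        self.agent.welcome(new_user)
        # Consolidation move: recap the discussion so the newcomer is not lost.
        if self.transcript:
            self.agent.request_summary(new_user, self.transcript)


# Example run: the second join triggers a summary request.
room = ChatRoom(FacilitatorAgent())
room.on_user_join("Leah")
room.post("Leah", "Here is my take on the first weather concept.")
room.on_user_join("a29")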

Context

Course Project for Computer Supported Collaborative Learning during the spring semester of 2015 – Carnegie Mellon University, Pittsburgh, PA

 

My Team

  • Gaurav Tomar – Software Engineer
  • German Canale – Agent Discourse Content Designer
  • Arka Maini – Instructional Designer / Usability Tester
  • Timmy Burkhart – Instructional Designer

 

My Role

Instructional Designer / Technologist

  • Defined learning goals of the course, and designed corresponding assessment and instructional sequence of activities
  • Built instructional tools such as video lectures, quizzes, and content of dialogues for the conversational agent
  • Conducted prototype testing and incorporated feedback into design decisions

The Lesson

The sequence of the lesson included the following features:

  • A video lecture that introduced the meteorological concepts covered in the lesson. I designed the video lecture and animations using Google Slides and iMovie.
  • A chat activity supported by a conversational agent
  • A quiz that users could complete after covering all of the concepts in the chat activity

 

Prototype Testing

The idea of a single, continuous chat room worked well because no matching of students into pairs was required. This mattered because, several times during these studies, an odd number of people were present in the room; a pairing approach would have left one person alone with the agent.

The agent was expected to manage several people in the chat room through its facilitation moves, but we found that beyond a certain number of participants it was difficult for the agent to organize the interaction in the room.

Consolidation moves, when they succeeded, did help new entrants overcome some of their confusion and stay in the room. This is supported by the fact that, during the first pilot study, some new entrants left the room after being ignored by participants who did not give them a summary. The pace of the conversation was also very fast during the first pilot study, both because of the large number of people in the room and because of unnecessary, frequent tutor moves triggered by unanticipated responses from participants.

Participants were expected to drive the discussion themselves and only occasionally prompt the agent for further discussion. Instead, they relied heavily on the agent to keep the conversation going, requesting that every topic be discussed further with it. As a result, the knowledge construction dialogues (KCDs) that the agent initiated in response to these requests dominated the conversation; in other words, the agent took over the conversation too often in both studies.

Lastly, Academically Productive Talk (APT) moves were expected to structure the conversation and reduce goalless interactions between students, but the agent had very few opportunities to make these moves, again because people interacted little with each other and instead requested discussions with the agent. Even when people did interact with each other, it was mostly greetings and other off-task talk. This showed that, in an environment like this, it is also important to get people talking beyond off-topic matters so that APT moves can be triggered. In the second study, we acted as human facilitators, creating opportunities for users to talk among themselves so that APT moves could be provided.

The users' over-reliance on the agent may have been due to the choice of user sample: participants saw the study as a test of the agent, and so tried to play around with it and probe its limits.

One of the major issues in the first pilot study was that the agent did not handle enough unanticipated responses. For every off-topic response during a KCD, such as a greeting, the agent asked the participant to retry answering the question being asked, creating a lot of noise in the conversation. This led to many other issues: a faster pace of conversation, new entrants being ignored because other participants could not give them a summary, and people in general being unable to follow the conversation.

For the second pilot study, we introduced a fix that allowed the agent to tolerate several unanticipated responses before asking for a clarification. The number of unanticipated responses to ignore was based on the number of users in the room at a given moment: the more users in the room, the more unanticipated responses were ignored. This fix drastically reduced the problem of the agent interrupting too frequently due to unanticipated responses. The only time the problem reappeared in the second pilot study was when a new person entered the room during a KCD, which produced a longer sequence of off-topic dialogue among conversants due to greetings and consolidation moves. A sketch of this tolerance rule follows.
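The following is only an illustration of the fix, not the code that was deployed; the function and class names are invented for the example, and the linear mapping from room size to tolerated responses is an assumption, since only the general rule (more users, more ignored responses) is described above.

def tolerance(num_users_in_room, base=1):
    """Assumed linear rule: more users in the room means more unanticipated
    responses are ignored before the agent asks for a clarification."""
    return base + num_users_in_room


class KCDQuestion:
    """Tracks responses to one question the agent has asked during a KCD."""

    def __init__(self, room_size):
        self.room_size = room_size
        self.unanticipated_seen = 0

    def on_response(self, matched_expected_answer):
        """Decide how the agent reacts to one participant message."""
        if matched_expected_answer:
            self.unanticipated_seen = 0
            return "advance_kcd"            # continue the dialogue
        self.unanticipated_seen += 1
        if self.unanticipated_seen > tolerance(self.room_size):
            self.unanticipated_seen = 0
            return "ask_for_clarification"  # first-pilot behavior, now rate-limited
        return "stay_silent"                # ignore greetings and other off-topic talk


# Example: in a room of four users, a few off-topic messages are ignored
# instead of each one triggering a clarification request.
question = KCDQuestion(room_size=4)
print(question.on_response(False))  # stay_silent
print(question.on_response(False))  # stay_silent
print(question.on_response(True))   # advance_kcd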

A quick qualitative look at the data from the third pilot study indicates that, in general, participants were more involved in on-topic interactions than in the two previous pilot studies. There are some points in the interactions where participants do seem to be off-topic. However, on these occasions participants are discussing the procedure of the task or the materials, or are actually making connections between the visuals they see and their everyday lives. In this sense, even if they are not achieving the main goal of the task, they show that they are engaged in it and are therefore not necessarily off-topic. For instance, in the next example, Leah relates the concept being discussed to her everyday experience in Pittsburgh, establishing connections between the topic at hand and real-life events to which the concept could be applied:

[Figure: Chat excerpt in which a student anchors the discussion to her personal life.]

Similarly, breakdowns in communication caused by the agent's KCDs or by participants just joining the chat, which were certainly an issue in the first pilot and to a lesser degree in the second, were not a main problem in this third pilot. Shallower KCDs allowed control of the conversation to shift quickly back to the chat participants. This is evidenced by the flow of the interaction, which was noticeably better than in the two previous pilots, and by the fact that there was far less talk about and around the agent, one of the main problems encountered in the first pilot. In this pilot, interactions remained coherent even when participants had just joined the conversation and made off-topic discursive moves with social purposes (greeting) or task-related purposes (stating that one cannot see the map the agent is showing).

Interactions initiated by the agent seem to have been appropriately triggered by its moves and by the inclusion of visual aids in the task (i.e., the map). Unlike in the first pilot, participants more frequently talked about themselves and initiated interactions on their own, which means the agent did not need to initiate every exchange and participants did not rely on it too much or too often, which could have been a problem.

Finally, the third pilot also shows that the agent's limited dictionary did not seem to be a relevant problem, as it was in the previous two pilots. This makes sense if we consider that interactions were more on-topic and participants did not feel lost, so the organization of talk followed a more coherent and natural flow, which also helped the agent avoid interrupting or misinterpreting participants' discursive moves, and vice versa. There were only a few occasions on which participants overtly expressed that the agent's move was not coherent or relevant at that specific moment in the conversation. However, even on these occasions participants were able to do what the agent asked of them, as the following example of a request for a summary when somebody joins the chat shows:

[Figure: Chat excerpt in which the agent welcomes a newly entering student.]

After a29 joins the conversation, the agent asks somebody to summarize what has been discussed. Even though Cassie T evaluates this move negatively, as not being appropriate for that specific point in the chat (“we haven’t discussed much”), other participants do summarize the discussion, which allows the conversation to move on and also acknowledges the presence of the new participant.
