NSF Awards: 1432606
Developing tools and analytics to automatically rate the quality of student collaboration using speech data.
William Finzer
Wow, what an amazing idea! I love that you can gauge levels of collaboration from speech without having to understand the words. It makes sense!
The “entropy” graph is particularly interesting to me because of an entropy concept we’re using in a project. Can you say a bit about what it is measuring?
Cynthia D'Angelo
Senior Researcher
Thanks Bill! The entropy graph that I briefly show at the end is a bit technical in nature, but basically it is looking at the distribution of a given variable (e.g., the total duration of a student's speech) across the group of three students. We wanted a single value to describe that distribution and tried Shannon entropy as that measure, but it seems to be turning out to be useful only for differentiating groups at the ends of the spectrum and not in the middle (since the entropy value does not distinguish between different types of groups well there). We think we need two values, not one, to describe the distribution better and help distinguish the groups (which is still better than the original three values, I suppose).
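To make the idea concrete, here is a minimal sketch (not code from the project) of how a single Shannon entropy value over each student's share of the group's total speech time might be computed. The durations and the function name are illustrative only.

```python
import math

def speech_entropy(durations):
    """Shannon entropy (in bits) of each student's share of total group speech time."""
    total = sum(durations)
    if total == 0:
        return 0.0
    shares = [d / total for d in durations]
    return -sum(p * math.log2(p) for p in shares if p > 0)

# One student dominating the talk gives low entropy; evenly shared talk gives
# entropy near log2(3) for a group of three.
print(speech_entropy([110.0, 3.0, 2.0]))   # ~0.30 bits
print(speech_entropy([40.0, 38.0, 42.0]))  # ~1.58 bits
```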
Miriam Gates
Researcher
This is a great idea — and will begin to address some of the struggles for teachers in group work. I’m curious to know about the items that you are using in the initial phase of data collection. Are these items based on existing curricula or were they written for this project? If they are based on existing curricula, how did you go about adapting them for this purpose?
Cynthia D'Angelo
Senior Researcher
The items we are using are from the Cornerstone math curriculum (based on SimCalc) that Jeremy Roschelle and other colleagues at SRI have been developing for over a decade now. We had to do a bit of adaptation, but not too much. Basically, we created an iPad app that let students use remote controls to register their answers. (Part of the reason was so we could log their actions better, and part was so we could try to get each student to be in charge of their own answer; in other setups, like just using a keyboard, some students would easily take over answering for another student.) So, the form of their interaction was changed, but the content of the items was not.
Thanks for your comment!
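For illustration, here is a rough sketch of the kind of per-student event such an app might record; the field names and values are hypothetical, not the project's actual log format.

```python
from dataclasses import dataclass

@dataclass
class AnswerEvent:
    """Hypothetical per-student log record for one button press on the remote."""
    timestamp: float   # seconds since the item was shown
    group_id: str
    student_id: str    # which student's remote control registered the press
    item_id: str
    choice: str        # the answer option selected
    submitted: bool    # False for a provisional selection, True for a final submit

log = [
    AnswerEvent(12.4, "g07", "s2", "item3", "B", False),
    AnswerEvent(41.9, "g07", "s2", "item3", "C", False),  # s2 changes their answer
    AnswerEvent(55.0, "g07", "s2", "item3", "C", True),
]
```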
Miriam Gates
Researcher
I was interested to note that your items were designed for groups of 3. Was there a reason that you wanted 3 students to work together? Could adaptation be done for groups of 2 or 4?
Cynthia D'Angelo
Senior Researcher
The items we were using were already designed for groups of 3, so that required the least amount of content adaptation. But, we did purposely want to avoid groups of 2 because that is a different dynamic in many cases. (We have some data on pairs using the system where they share the responsibility of the third response, but I don’t think we’ll have enough of that situation to make any strong claims about differences.) I think that groups of 4 would be more similar to the groups of 3 in how they distribute and spread out the intellectual work of the group, so it could probably be easily adapted to that size.
E Paul Goldenberg
Distinguished Scholar
The technical challenge is fascinating and the goal could be a real boon to classrooms. I realize this is early in the project, still too early even to say what will ultimately become possible (with current technology), but I have a question that might be answerable even at this early stage. As I understand the technique, you are hand coding features of collaboration that you care about, machine-detecting features of language (timing, prosody, etc.) and using machine learning to find relationships between the two that will (eventually) allow the machine to make reliable claims, useful to teachers, about the former based on the latter. Fully understanding that you cannot yet make such claims, and are not sure yet even what claims you will be able to make, what /kinds/ of claims are you imagining could become possible?
Cynthia D'Angelo
Senior Researcher
It is still early in the process, as we are currently working on full analysis of the first phase of data collection, but basically we are trying with this exploratory project to see whether or not there is any linkage or relationship between the machine-detectable features of language and the quality (and/or features) of collaboration. I’m hoping by the end of the project we can produce a usable prototype system that will be able to give a teacher some kind of sense of how students were collaborating in their small groups and whether or not they need to intervene. We might also, depending on the results, be able to give specific feedback about what a particular group could do to improve their collaboration (e.g., ask each other more questions or make sure all three people are contributing to the discussion). I think we will be able to say groups with more ‘good collaboration’ are characterized by [these] speech features and groups with less collaborative activity typically have [these] kinds of features.
Thanks for your question!
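As a rough illustration of the kind of setup described above (not the project's actual pipeline), the sketch below fits a classifier to machine-detected speech features using hand-coded collaboration ratings as labels. The feature set, the toy values, and the choice of classifier are all placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in data: each row holds machine-detected speech features for one
# group work segment, e.g. [total talk time (s), number of turns,
# mean pause length (s), overlapping-speech time (s)]. Values are made up.
X = np.array([
    [150.0, 42, 0.9, 12.0],
    [160.0, 39, 1.1, 10.5],
    [140.0, 45, 0.8, 14.2],
    [ 40.0,  9, 4.2,  0.5],
    [ 55.0, 12, 3.6,  1.1],
    [ 30.0,  7, 5.0,  0.2],
])
# Hand-coded collaboration quality for the same segments (1 = good, 0 = poor).
y = np.array([1, 1, 1, 0, 0, 0])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Predict collaboration quality for a new, unrated segment.
new_segment = np.array([[120.0, 33, 1.4, 8.0]])
print(clf.predict(new_segment))   # e.g. [1]
print(clf.feature_importances_)   # which speech features carry the signal
```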
E Paul Goldenberg
Distinguished Scholar
Fascinating! I’m really eager to hear what you learn from this.
Cynthia D'Angelo
Senior Researcher
Hi everyone! Thanks for stopping and watching this video. I am really excited about sharing this project with you and hearing what you think about it.
I especially would love to hear from teachers and educators about how this could help you incorporate more collaboration in your classroom and what kind of feedback from a system like this you would want. When you tell your students to collaborate in class, what do you think that means to them?
Courtney Arthur
This is such a great application to gather useful data for teachers. I wonder what grade levels this has been used with and whether there is a way to gauge how/when answers are changed based on the conversations students have within their groups? It would be interesting to capture some of the reasoning behind their answers.
Cynthia D'Angelo
Senior Researcher
Right now we are using it with middle school students.
We are collecting the log data from the application, so we know every time a student changes their answer, as well as when the student presses their button to submit an answer choice. Additionally, one of the things we are manually coding is whether or not students are giving explanations about their answer choices to their fellow students. Early results seem to show that groups with more explanations tend to have better collaboration.
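Here is a small sketch of what working with that kind of log data might look like, counting how often each student changed a previously selected answer; the log format and values are invented for illustration.

```python
from collections import Counter

# Hypothetical, simplified log: (time in seconds, student_id, event type, answer)
log = [
    (12.4, "s1", "select", "B"),
    (20.1, "s2", "select", "A"),
    (41.9, "s1", "select", "C"),   # s1 changes answer after group discussion
    (55.0, "s1", "submit", "C"),
    (58.2, "s2", "submit", "A"),
]

# Count how often each student changed a previously selected answer.
changes = Counter()
last_choice = {}
for t, student, event, answer in log:
    if event == "select":
        if student in last_choice and last_choice[student] != answer:
            changes[student] += 1
        last_choice[student] = answer

print(dict(changes))   # {'s1': 1}
```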
Roger Taylor
Hello Cynthia, this is one of the most creative educational technology projects I've seen. I'm very much looking forward to hearing more about your project in the future.
Cynthia D'Angelo
Senior Researcher
Thanks so much!
Jacqueline Barber
I really enjoyed learning about this project, and look forward to hearing about your progress!
Cynthia D'Angelo
Senior Researcher
Thanks!
Susan Doubler
Cynthia, the door of possibility is just opening for speech recognition and for video analytics. While you are working with speech, we are working with video and asking some similar questions (e.g., length of speech turn, number of turns at talk in an exchange). We'd love to be able to identify individual speakers, but discovered that children's voices aren't as differentiated as adults'. For now, we are relying more on gesture, body position, and gaze. Imagine a future when we bring advances in speech recognition and video analytics together! Thanks
Cynthia D'Angelo
Senior Researcher
Yes, there is a big issue with being able to distinguish children's voices. Even when we're watching the video it is sometimes hard to tell who is talking. This is one of the reasons why we collected individual audio channels for each student. But we also collected a single audio stream of all three students in each group, and so one thing we'll be looking at is whether or not we can distinguish them and also whether or not we need to. In many cases, it seems possible that there are other features of speech, besides who is talking, that can give us enough information about the collaboration.
Thanks for your comment! Good luck with your project.
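One way to picture "features of speech besides who is talking": a purely illustrative sketch (not the project's code) that derives group-level measures from voice-activity segments detected in a single shared audio stream, without any speaker labels.

```python
# Hypothetical voice-activity segments (start, end in seconds) detected from the
# single group-level audio stream, with no speaker labels attached.
segments = [(0.0, 4.2), (5.0, 6.1), (6.4, 15.0), (20.5, 22.0)]
session_length = 30.0

total_speech = sum(end - start for start, end in segments)
num_turns = len(segments)                               # rough proxy: one segment per turn
silence_fraction = 1 - total_speech / session_length
longest_silence = max(b[0] - a[1] for a, b in zip(segments, segments[1:]))

print(total_speech, num_turns, round(silence_fraction, 2), longest_silence)
```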