By Judi Fusco
Cyberlearning 2017 was an inspiring event in April. You can see a storify (a record of the tweets during the meeting) that documents many of the topics and technologies presented. In this post, I'm going to share a little about the four keynotes and give you links so you can watch them.
The four keynotes kicked off with a future-thinking talk about virtual reality (VR) by Jeremy Bailenson. The VR discussed in this keynote isn't ready for the classroom yet, but classroom-ready technologies are coming soon. Bailenson's keynote describes his work and helps us think about what we still need to investigate about learning and VR. Cyberlearning researchers and teachers need to be thinking and planning now for the future. (We'll do a post soon about VR that is already in the classroom.)
The second keynote, by Mary Helen Immordino-Yang, focused on the link between emotions and learning and what we know about it from neuroscience. Most of the good teachers I know intuitively understand how important emotional connection is in the learning process, but the keynote helps us understand why emotion and cognition are so intertwined, and it has given me a lot to think about. I will share more in another post.
The third keynote talk by Eileen Scanlon was on the challenges of creating and sustaining a meaningful program of research. Eileen does research on Citizen Science; you can learn more about it in a CIRCL Primer on Citizen Science.
The final keynote, given by Karthik Ramani, discussed computational fabrication as a way to engage students and help them learn. He is also creating new technologies and new interfaces to existing technologies; in the keynote, he describes his work and his lab. His students showed off cardboard robots! In the photo on the right, one of the CIRCL Educators checks out the robots.
I highly recommend watching each of the four keynote videos at some point. Each keynote is about half an hour; if you watch one, leave a comment and tell us what you think and whether you see any implications for your practice. You can also read reflections on the meeting by Jeremy Roschelle, one of the co-chairs of the conference.
By Judi Fusco
Our last post discussed embodied learning and Cyberlearning. Cyberlearning is many different things; on the CIRCL site, we have an overview of Cyberlearning. In this post, we’ll look at another example: a new Cyberlearning project developing technology that may be able to help support teachers and the collaborative learning process.
It can be difficult to understand what is happening during collaborative work in a classroom when there are multiple groups of students and just one teacher. In a previous post we discussed how hard it is for an administrator who walks into a classroom to figure out what is happening when students are collaborating; you can't walk up to a group and instantly understand what they are doing. It's hard for teachers, too, because they can't be in all of the groups at the same time. Of course, teachers wish they could be a fly on the wall in each group so that they could ensure that each group is staying on task and learning, but that's impossible. Or is it?
At the end of that previous post, I asked if cyberlearning researchers could help create tools to better understand collaboration. When I did that, I was kind of setting myself up to introduce you to a Cyberlearning researcher, Cynthia D’Angelo. She has a project that may lead to the creation of a new Cyberlearning tool to address the problem that it is impossible for a teacher to be in more than one place at a time. Watch this 2-minute video about Speech-Based Learning Analytics for Collaboration (SBLAC) and see what you think.
Cynthia’s research is still in early stages, but all the practitioners I’ve told about it find it interesting and want it for their classroom. Here’s a little more about the project:
In this project, the team is working to determine whether technology that examines certain aspects of speech, such as the amount of overlapping speech or prosodic features (like pitch or energy), can give real-time insight into a group's collaborative activities. If it can, and SBLAC makes it into classrooms, teachers could get instant information about what is happening in group collaboration even when they aren't present in that group.
The proposed technology would require a “box” of some sort to sit with each group and analyze the group's speech features in real time. One research question in the project is, “Are non-content based speech features (such as amount of overlapping speech or vocal pitch) reliable indicators for predicting how well a group is collaborating?” Initial results suggest the answer is promising. (Note: this technology doesn't analyze the content of the students' speech, just features of the speech, which should help preserve student privacy.)
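To make “non-content speech features” a little more concrete, here is a minimal sketch (not the project's actual code, and the turn data are made up) of how one such feature, the share of talk time where two or more students speak at once, could be computed from diarized speaker turns:

```python
def overlap_fraction(turns):
    """Given turns as (speaker, start_sec, end_sec) tuples, return the
    share of total talk time during which 2+ speakers talk at once."""
    # Turn each interval into a pair of sweep-line events: +1 when a
    # speaker starts talking, -1 when they stop.
    events = []
    for _speaker, start, end in turns:
        events.append((start, +1))
        events.append((end, -1))
    events.sort()  # ties: -1 sorts before +1, so touching turns don't overlap

    active = 0       # speakers currently talking
    prev_t = None
    talk = 0.0       # seconds with >= 1 active speaker
    overlap = 0.0    # seconds with >= 2 active speakers
    for t, delta in events:
        if prev_t is not None:
            span = t - prev_t
            if active >= 1:
                talk += span
            if active >= 2:
                overlap += span
        active += delta
        prev_t = t
    return overlap / talk if talk else 0.0

# Hypothetical group: B interrupts A for 1 second out of 11 seconds of talk.
turns = [("A", 0.0, 5.0), ("B", 4.0, 9.0), ("C", 10.0, 12.0)]
print(overlap_fraction(turns))  # -> 0.0909... (1 s overlap / 11 s talk)
```

Note what is missing: nothing here looks at the words themselves, only at who was talking when, which is the sense in which such features are "non-content."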
It’s important to support groups during collaboration because sometimes groups aren’t effective or an individual student gets left behind. This work, while still in its early stages, could potentially help teachers identify groups having problems during collaboration. A teacher would no longer have to guess how a group was working when they weren’t present and could target the groups having difficulties to help them improve.
If you want to learn more about the project, watch Cynthia’s 3-minute video shared at the NSF 2016 Video showcase: Advancing STEM Learning for All. Or you can read the NSF award abstract. Stay tuned, as we’ll have more about this project from two teachers who are working with Cynthia on SBLAC this summer.
SBLAC really requires teachers and researchers to work together on the hard problem of collaboration as it tries to create new tools to help in the classroom. What do you think of the idea? What do you think is hard or important about collaboration? What kind of feedback would you want on the groups in your classroom? Could SBLAC help administrators understand collaboration? Going forward, we’ll talk more about collaboration and collaborative learning, so feel free to leave questions or comments about collaboration, too.
By Mary Patterson
If we consider the constructionist and constructivist pedagogical ideas of Seymour Papert and Jean Piaget, how is all this technology helping students construct meaning? And more importantly, how can technology help us do it better?
Learning scientists are partnering with technology experts and teachers to answer these questions. Current trends in Cyberlearning include research on games and virtual worlds, data visualization tools, collaborative learning environments, intelligent tutors, augmented reality and immersive environments, embodied multimodal learning, learning analytics, adaptive learning, and more.
For instance, PIs Karl Ola Ahlqvist, Andrew Heckler, and Rajiv Ramnath of Ohio State University are exploring the idea of using online map games to foster critical thinking and impact learning about a far-away place in a tool they call GeoGames.
The Center for Innovative Research in Cyberlearning provides a peek into the future with the projects highlighted on its page: http://circlcenter.org/projects/
What are YOU curious about? What learning questions do YOU need answered that would give you better insight into how students learn? What technology do you WISH existed right now?
Imagine turning your classroom into a planetary system or a town above an aquifer. Researchers Thomas Moher, Tanya Berger-Wolf, Leilah Lyons, Joel Brown, and Brian Reiser, from the University of Illinois at Chicago, in a project titled “Using Technologies to Engage Learners in the Scientific Practices of Investigating Rich Behavioral and Ecological Questions,” use dynamic phenomena that are imagined to be “embedded” in the physical space of the classroom and made accessible through stationary or mobile “portals” (tablet and laptop computers, large displays, etc.) that provide continuous, location-specific visualizations of the phenomenon. Students collectively observe, manipulate, and chronicle the embedded phenomenon, and construct models to reflect their understandings.
In Massachusetts and Virginia, researchers Charles Xie of the Concord Consortium and Jennifer Chiu of the University of Virginia are helping students see science concepts in action in the real world by developing mixed-reality technologies that augment hands-on laboratory activities with sensor-driven computer simulations, in a project called Mixed Reality Labs: Integrating Sensors and Simulations to Improve Learning.
As teachers, we are often the receivers of technology systems and learning theories. Wouldn’t it be great to have a hand in the design of these things based on what we experience each day? Let’s start this conversation!
Teachers, what do YOU need from technology and learning sciences?
PLEASE SEND IN YOUR COMMENTS!