
BACKGROUND

Regardless of their occupations, people need to handle and solve different types of problems every day. Problem solving is the process of finding a method to achieve a goal from an initial state. However, real-life problems are usually ill-structured, without clear goals or givens, and therefore cannot be solved in a routine manner. Knowing how to solve problems in real-life situations has become an essential skill for the 21st century (Griffin, McGaw, & Care, 2012; Greiff et al., 2014). The assessment of problem solving skill differs fundamentally from the traditional assessment of curriculum content knowledge. A problem solving assessment must capture students’ abilities to deal with ill-structured and dynamic environments, which requires that the assessment environment change in response to students’ behaviors and responses. Traditional static, paper-based assessment clearly fails to do so. Existing problem solving assessments usually place students in dynamic situations by implementing a series of simulations (Zhang, Yu, Li, & Wang, 2017; Schweizer, Wüstenberg, & Greiff, 2013), and problem solving performance is then evaluated in terms of students’ outputs in the simulation. Although students’ behaviors, also called process data, are usually logged and analyzed as well, the analysis of process data is still very limited. Aggregated measures such as time on task and number of clicks are often used to profile problem solving procedures, and only a few studies have identified simple problem solving strategies from the log files (Zhang et al., 2014; Greiff, Niepel, Scherer, & Martin, 2016).
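To make the idea of aggregated process-data indicators concrete, the following minimal Python sketch computes total time on task and number of clicks from a simulation log. The log format and field names are illustrative assumptions, not those of any cited study.

from datetime import datetime

def aggregate_indicators(log_events):
    """Aggregate a list of raw log events into simple process indicators.
    Each event is assumed to look like:
    {"timestamp": "2017-01-01T10:00:05", "action": "click", "target": "valve_a"}
    """
    times = [datetime.fromisoformat(e["timestamp"]) for e in log_events]
    total_time = (max(times) - min(times)).total_seconds() if times else 0.0
    n_clicks = sum(1 for e in log_events if e["action"] == "click")
    return {"total_time_sec": total_time, "num_clicks": n_clicks}

example_log = [
    {"timestamp": "2017-01-01T10:00:00", "action": "click", "target": "valve_a"},
    {"timestamp": "2017-01-01T10:00:12", "action": "click", "target": "valve_b"},
    {"timestamp": "2017-01-01T10:00:30", "action": "submit", "target": "answer"},
]
print(aggregate_indicators(example_log))  # {'total_time_sec': 30.0, 'num_clicks': 2}

Indicators of this kind are easy to compute, but they discard the sequential structure of problem solving, which is exactly the information that strategy-level analyses try to recover.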

Beyond the individual problem solving assessment described above, researchers have in recent years begun to emphasize collaborative problem solving assessment. Collaboration is a long-standing practice in many environments, and people often need to collaborate to solve real-life problems (Wilson, Gochyyev, & Scalise, 2017). Because collaborative problem solving involves at least two participants, a participant’s collaborative performance is strongly affected by the collaborators, so the assessment of the skill faces a reliability issue. Some researchers address this issue by creating simulated agents, also called avatars, that solve problems together with an individual (Rosen, 2017); because the behaviors of a simulated agent are scripted in advance, only the individual can affect the collaboration. Other researchers carefully design joint activities for multiple individuals and assess collaborative skills by analyzing their collaborative behaviors (Wilson, Gochyyev, & Scalise, 2017).
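The following toy sketch (an assumption for illustration, not Rosen’s actual system) shows why a scripted agent controls for the collaborator: the agent’s contribution is a fixed function of the dialogue state, so any variation in the collaboration comes from the human participant.

# Scripted agent replies keyed by an assumed dialogue state.
SCRIPT = {
    "start": "Let's read the task description first. What do you think the goal is?",
    "goal_identified": "Good. Should we split the variables and test one each?",
    "plan_agreed": "I will test variable A and report back to you.",
}

def agent_reply(dialogue_state: str) -> str:
    # The agent responds deterministically for a given state.
    return SCRIPT.get(dialogue_state, "Can you explain your last step?")

print(agent_reply("start"))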

Several learning analytics techniques have been used to analyze both individual and collaborative problem solving. In general, two types of approaches have been adopted: (1) aggregate problem solving behaviors into indicators and explore the correspondence between the aggregated indicators and problem solving outputs, using correlation analysis or supervised learning algorithms such as decision trees and neural networks; or (2) analyze problem solving behaviors directly, without aggregation, using algorithms such as hidden Markov models, lag sequential analysis, and association rule mining. Regardless of the analysis approach, the problem solving actions in the log files have to be coded at a “reasonable” level before being fed to the algorithms. The coding process can be treated as a data pre-processing step, as in typical data mining. “Reasonable” here means that the coded behaviors should be neither too specific nor too general: a coded behavior is too specific if it records the exact pixel where an action occurs, and too general if it only records whether an action occurred. Finite state machines are often used to automatically code problem solving behaviors at an appropriate level of abstraction. The learning theory underlying the problem solving assessment should guide the design of the behavior coding schema. In this context, learning analytics can be seen as the method that transforms a learning theory of problem solving into analysis results. It is therefore important to explore the intersection of problem solving assessment and learning analytics.
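As an illustration of finite-state-machine coding, the sketch below maps a sequence of raw log actions to coded behaviors at an intermediate level of abstraction. The states, actions, and behavior labels are assumptions made for this example, not a published coding schema.

# (current_state, raw_action_type) -> (next_state, coded_behavior)
TRANSITIONS = {
    ("exploring", "open_document"):  ("reading",   "INFO_SEARCH"),
    ("reading",   "close_document"): ("exploring", "INFO_SEARCH_END"),
    ("exploring", "set_variable"):   ("testing",   "HYPOTHESIS_TEST"),
    ("testing",   "run_simulation"): ("testing",   "HYPOTHESIS_TEST"),
    ("testing",   "submit_answer"):  ("done",      "SOLUTION_SUBMIT"),
}

def code_actions(raw_actions, start_state="exploring"):
    """Run the FSM over raw log actions and emit one coded behavior per action."""
    state, coded = start_state, []
    for action in raw_actions:
        state, behavior = TRANSITIONS.get((state, action), (state, "OTHER"))
        coded.append(behavior)
    return coded

print(code_actions(["open_document", "close_document", "set_variable",
                    "run_simulation", "submit_answer"]))
# ['INFO_SEARCH', 'INFO_SEARCH_END', 'HYPOTHESIS_TEST', 'HYPOTHESIS_TEST', 'SOLUTION_SUBMIT']

The coded sequence, rather than the raw clicks, is what would be passed to algorithms such as hidden Markov models or sequence mining.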

Note that the design of the assessment’s user interface determines which problem solving unit actions enter the log files. Obviously, it is impossible to capture how a student solves a problem if the student is only required to fill in the final answer. Theories from learning science and cognitive science should drive the interface design to ensure that appropriate problem solving unit actions are recorded for the purpose of assessment. For example, a given of a problem may be hidden until a series of interactions reveals it when knowledge-acquisition skill needs to be assessed, and problem-relevant documents may be mixed with irrelevant documents in a virtual library when the skill of identifying information needs to be assessed (Zhang, Yu, Li, & Wang, 2017).
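A simple way to see the link between interface design and logging is to look at a possible record for one unit action. The schema below is an assumption for illustration (not taken from the cited study): a field such as whether the opened document was task-relevant can only be logged if the interface exposes relevant and irrelevant documents as separate, identifiable objects.

from dataclasses import dataclass
from typing import Optional

@dataclass
class UnitAction:
    student_id: str
    timestamp: float            # seconds since task start
    action_type: str            # e.g. "open_document", "set_variable", "submit"
    target: str                 # identifier of the interface object acted on
    relevant: Optional[bool]    # whether the target was task-relevant, if known

log = [
    UnitAction("s01", 12.4, "open_document", "doc_07", relevant=True),
    UnitAction("s01", 30.1, "open_document", "doc_12", relevant=False),
]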

In summary, researchers in learning science and data analytics have to work together to develop high-quality problem solving assessments. The proposed workshop aims to bring together researchers from both areas who are interested in facilitating problem solving assessment with learning analytics. The organizers will invite researchers who have previously conducted related studies to present their findings and lessons learned, and all workshop participants will then be invited to discuss the presentations, learn from each other, and explore collaboration opportunities.

References

Greiff, S., Niepel, C., Scherer, R., & Martin, R. (2016). Understanding students' performance in a computer-based assessment of complex problem solving: An analysis of behavioral data from computer-generated log files. Computers in Human Behavior, 61, 36-46.

Greiff, S., Wüstenberg, S., Csapó, B., Demetriou, A., Hautamäki, J., Graesser, A. C., ... Martin, R. (2014). Domain-general problem solving skills and education in the 21st century. Educational Research Review, 13, 74-83.

Griffin, P., McGaw, B., & Care, E. (2012). Assessment and teaching of 21st century skills. Springer.

Rosen, Y. (2017). Assessing students in human-to-agent settings to inform collaborative problem-solving learning. Journal of Educational Measurement, 54(1), 36-53.

Schweizer, F., Wüstenberg, S., & Greiff, S. (2013). Validity of the MicroDYN approach: Complex problem solving predicts school grades beyond working memory capacity. Learning and Individual Differences, 24(2), 42-52.

Wilson, M., Gochyyev, P., & Scalise, K. (2017). Modeling data from collaborative assessments: Learning in digital interactive social networks. Journal of Educational Measurement, 54(1), 85-102.

Zhang, L., VanLehn, K., Girard, S., Burleson, W., Chavez-Echeagaray, M. E., Gonzalez-Sanchez, J., ... Hidalgo-Pontet, Y. (2014). Evaluation of a meta-tutor for constructing models of dynamic systems. Computers & Education, 75, 196-217.

Zhang, L., Yu, S., Li, B., & Wang, J. (2017). Can students identify the relevant information to solve a problem? Journal of Educational Technology & Society, 20(4), 288-299.
