Given that half of all participants in the experimental group never requested feedback, this outcome was not unexpected.

[Figure: Average performance on each criterion, by condition and feedback use.]

It could be argued that students who did not request feedback when it was made available to them are less proficient students.
We thus found no evidence to suggest that there was a difference between students who could have asked for feedback but did not do so, and students who did not have the option to ask for feedback.

The creation of hypotheses is a critical step in the inquiry cycle (Zimmerman), yet students of all ages experience difficulties creating informative hypotheses (Mulder et al.).
Automated scaffolds can help students create informative hypotheses, but their implementation in the regular curriculum is often cost-prohibitive, especially since they can typically only be used in one specific domain and language. This study set out to create a hypothesis scratchpad that can automatically evaluate and score hypotheses and provide students with immediate feedback.
We use a flexible context-free grammar approach that can be adapted and extended for other languages and domains with relative ease. We described the development process of this tool over two pilot studies and evaluated its instructional effectiveness in a controlled experiment. Across three studies, we showed that a hypothesis parser based on a context-free grammar is feasible, attaining moderate to almost-perfect levels of agreement with human coders. The required complexity of the parser is directly linked to the syntactic complexity of the domain.
For example, the electrical circuits domain requires a more complex parser than the supply and demand domain. Further development of the context-free grammar used in the parser will contribute to higher reliability and may extend it to other languages and domains. The second pilot study illustrated that students' lack of familiarity with the online environment and its tools can have a negative effect on their performance. Students were distracted by technical and process-related issues, and had difficulty remaining on-task.
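To make the approach concrete, the sketch below shows how a context-free grammar for "if ... then ..." hypotheses can keep language-specific keywords separate from domain-specific vocabulary, so that retargeting the parser means swapping out a word list. All grammar rules, vocabularies, and Dutch keywords here are illustrative examples of the technique, not the scratchpad's actual implementation.

```python
# Language layer: sentence keywords and direction words per language.
# (The Dutch entries are plausible equivalents chosen for illustration.)
LANGUAGES = {
    "en": {"if": "if", "then": "then", "dirs": {"increases", "decreases"}},
    "nl": {"if": "als", "then": "dan", "dirs": {"stijgt", "daalt"}},
}

# Domain layer: the variables students may mention in each domain.
DOMAINS = {
    "circuits": {"voltage", "current", "resistance"},
    "economics": {"price", "supply", "demand"},
}

def make_parser(lang, domain):
    """Build a validity check for the single rule: S -> IF VAR DIR THEN VAR DIR."""
    kw, vocab = LANGUAGES[lang], DOMAINS[domain]
    # One predicate per terminal position in the rule's right-hand side.
    pattern = [
        lambda t: t == kw["if"],
        lambda t: t in vocab,
        lambda t: t in kw["dirs"],
        lambda t: t == kw["then"],
        lambda t: t in vocab,
        lambda t: t in kw["dirs"],
    ]

    def parse(text):
        tokens = text.lower().rstrip(".").split()
        return len(tokens) == len(pattern) and all(
            check(tok) for check, tok in zip(pattern, tokens)
        )

    return parse
```

A full grammar with recursive or optional constituents would need a general CFG parsing algorithm (such as Earley's, cited in the references) rather than this flat single-rule match, but the layering idea carries over unchanged.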
In the final experiment, we used a largely identical learning environment, but students were verbally introduced to each phase. These introductions allowed students to focus on the content of the learning environment, rather than on how to use the environment itself. In fact, none of the background variables collected (age, gender, physics grade, and educational level) were significantly related to feedback requests or the quality of hypotheses (cf. Timmers et al.).
If the goal had been to obtain as many hypotheses as possible and assess the performance of the parser alone, we would have been better off following the approach taken in the first pilot. Instead, we embedded the experiment in regular lessons; in doing so, we can draw conclusions that are likely to apply to educational practice, rather than to laboratory conditions alone. In the first pilot, the number of feedback requests was significantly related to the quality of hypotheses. This result was confirmed in a controlled experiment, where students who requested feedback were significantly more likely to create syntactically valid hypotheses than those who did not.
The effects of feedback were immediate; hypotheses for which feedback was requested even once were more likely to be correct. To the best of our knowledge, no other tool exists that can reliably score hypotheses, can easily be adapted to different domains, and allows students to create free-text hypotheses.
The automated hypothesis scratchpad we present here can provide a clear and immediate benefit in science learning, provided students request feedback. Because the tool increases the quality of students' hypotheses, we may assume that students can engage in more targeted inquiries, positively impacting their learning outcomes.
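The immediate feedback described above can be driven directly by the parser: when a hypothesis fails to match the grammar, the position of the first mismatch indicates which hint to show. The sketch below illustrates this idea under assumed vocabulary and hint wording; the scratchpad's real feedback messages and grammar are not reproduced here.

```python
VARIABLES = {"voltage", "current", "resistance"}
DIRECTIONS = {"increases", "decreases"}

# Expected token sequence with a hypothetical feedback hint per position.
SLOTS = [
    (lambda t: t == "if", "Start your hypothesis with 'if'."),
    (lambda t: t in VARIABLES, "Name the variable you will change."),
    (lambda t: t in DIRECTIONS, "Say how that variable changes."),
    (lambda t: t == "then", "Use 'then' to introduce the expected effect."),
    (lambda t: t in VARIABLES, "Name the variable you expect to change."),
    (lambda t: t in DIRECTIONS, "Say how you expect it to change."),
]

def feedback(text):
    """Return None if the hypothesis is well formed, else the first applicable hint."""
    tokens = text.lower().rstrip(".").split()
    for i, (check, hint) in enumerate(SLOTS):
        # A missing or mismatching token at position i triggers that slot's hint.
        if i >= len(tokens) or not check(tokens[i]):
            return hint
    if len(tokens) > len(SLOTS):
        return "Remove the extra words after your hypothesis."
    return None
```

Returning only the first failed criterion keeps the feedback focused on one actionable fix at a time, which matches the scratchpad's goal of immediate, usable hints rather than a full error report.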
How students can best be encouraged to request and use feedback is an open problem, and out of scope for this project. The automated hypothesis scratchpad could also be adapted into a monitoring tool, highlighting students who may have difficulty creating hypotheses so that teachers can intervene directly. The ability to reliably score hypotheses presents possibilities besides giving feedback.
For example, hypothesis scores could serve as an indicator of inquiry skill. As such, they can be part of student models in adaptive inquiry learning environments. Crucially, obtaining an estimate from students' inquiry products is less obtrusive than doing so with a pre-test, and likely to be more reliable than estimates obtained from students' inquiry processes. The aggregate hypothesis score computed for students did not have a known parametric distribution. This represents a serious limitation, as the score could not be used in statistical analyses.
As a result, we chose to test statistical significance based only on the syntax criterion. Investigating alternative modeling techniques to arrive at a statistically valid conclusion based on multiple interdependent criteria will be part of our future work.

An automated hypothesis scratchpad providing students with immediate feedback on the quality of their hypotheses was implemented using context-free grammars. The automated scratchpad was shown to be effective; students who used its feedback function created better hypotheses than those who did not.
The use of context-free grammars makes it relatively straightforward to separate the basic syntax of hypotheses, language-specific constructs, and domain-specific implementations. This separation allows the tool to be quickly adapted to new languages and domains, configured by teachers, and included in a broad range of inquiry environments.

All participating schools had obtained written informed consent from students' parents to perform research activities that fall within the regular curriculum. Parents were not asked to give consent for this study specifically.
The experiments we performed were embedded in the students' curriculum, and the collected data was limited to learning processes and outcomes. Students were briefed that their activities in the online learning environment would be logged, and that this data would be used in anonymized form. Both the research protocol and the consent procedures followed were approved by the ethical board of the Faculty of Behavioural, Management and Social Sciences of the University of Twente.

KK and AL designed the intervention.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The randomizer forwarded each student's browser to one of the experimental conditions. Randomization was weighted to ensure a roughly equal distribution across conditions in each session.

References

Aleven, V. Automated, unobtrusive, action-by-action assessment of self-regulation during learning with an intelligent tutoring system.
Alfieri, L. Does discovery-based instruction enhance learning? A meta-analysis.
Anjewierden, A. Analysis of Hypotheses in Go-Lab.
Bates, D.
Belland, B. Synthesizing results from empirical research on computer-based scaffolding in STEM education: a meta-analysis.
Bollen, L. Hypothesis Scratchpad.
Brod, G. When generating a prediction boosts learning: the element of surprise.
Burns, B. Goal specificity effects on hypothesis testing in problem solving.
Chomsky, N. Three models for the description of language. IRE Transactions on Information Theory, 2.
D'Angelo, C.
Durlach, P.
Earley, J. An efficient context-free parsing algorithm.