Background/Objectives: Agility and cognitive abilities are typically assessed separately with different motor and cognitive tests. While many agility tests lack a reactive decision-making component, cognitive assessments are still mainly based on computer-based or paper-and-pencil tests with low ecological validity. This study is the first to validate the novel SKILLCOURT technology as an integrated assessment tool for agility and cognitive-motor performance.
Methods: Thirty-two healthy adults performed agility (Star Run), reactive agility (Random Star Run), and cognitive-motor (executive function test, 1-back decision making) performance assessments on the SKILLCOURT. The cognitive-motor tests required lower-limb responses in a standing position to increase ecological validity compared to computer-based tests. Test results were compared to established motor and agility tests (countermovement jump, 10 m linear sprint, T-agility test) as well as computer-based cognitive assessments (choice reaction, Go-NoGo, task switching, memory span). Correlation and multiple regression analyses quantified the relations between SKILLCOURT performance and the motor and cognitive outcomes (see the illustrative sketch after this abstract).
Results: Star Run and Random Star Run tests were best predicted by linear sprint (r = 0.68, p < 0.001) and T-agility performance (r = 0.77, p < 0.001), respectively. The executive function test performance was well explained by computer-based assessments on choice reaction speed and cognitive flexibility (r = 0.64, p < 0.001). The 1-back test on the SKILLCOURT revealed moderate but significant correlations with the computer-based assessments (r = 0.47, p = 0.007).
Conclusion: The results support the validity of the SKILLCOURT technology for assessing agility and cognitive-motor performance in more ecologically valid tasks. This technology provides a promising alternative to existing performance assessment tools.
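A minimal, self-contained sketch of the kind of correlation and multiple regression analysis described in the Methods above, using synthetic data: the sample size matches the study (n = 32), but all values, distributions, and variable names (sprint_10m, t_agility, random_star) are hypothetical stand-ins, not the study's data.

```python
# Illustrative sketch only: synthetic data, hypothetical values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 32                                    # sample size as in the study

# Hypothetical reference-test scores (seconds)
sprint_10m = rng.normal(1.9, 0.10, n)     # 10 m linear sprint times
t_agility = rng.normal(10.5, 0.60, n)     # T-agility test times
# Hypothetical Random Star Run times, built to correlate with both
random_star = 0.5 * sprint_10m + 0.8 * t_agility + rng.normal(0, 0.4, n)

# Pearson correlation between a SKILLCOURT outcome and one reference test
r, p = stats.pearsonr(t_agility, random_star)
print(f"r = {r:.2f}, p = {p:.3f}")

# Multiple regression: predict the SKILLCOURT outcome from both reference tests
X = np.column_stack([np.ones(n), sprint_10m, t_agility])
coef, *_ = np.linalg.lstsq(X, random_star, rcond=None)
print("intercept, b_sprint, b_agility:", np.round(coef, 2))
```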
We study threshold testing, an elementary probing model whose goal is to choose a large value from n i.i.d. random variables. An algorithm can test each variable X_i once against a threshold t_i, and the test returns binary feedback indicating whether X_i ≥ t_i or not. Thresholds can be chosen adaptively or non-adaptively by the algorithm. Given the test results for all variables, we then select the variable with the highest conditional expectation. We compare the expected value obtained by the testing algorithm with the expected maximum of the variables. Threshold testing is a semi-online variant of the gambler's problem and prophet inequalities. Indeed, the optimal performance of non-adaptive algorithms for threshold testing is governed by the standard i.i.d. prophet inequality, with a ratio of approximately 0.745 + o(1) as n → ∞. We show how adaptive algorithms can significantly improve upon this ratio: our adaptive testing strategy guarantees a competitive ratio of at least 0.869 - o(1). Moreover, we show that there are distributions for which no algorithm can achieve a ratio better than some constant c < 1, even as n → ∞. Finally, when each variable can be tested multiple times (with n tests in total), we design an algorithm that achieves a ratio of 1 - o(1).
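To make the probing model concrete, here is a minimal Monte Carlo sketch of non-adaptive threshold testing. Everything beyond the model itself is an assumption for illustration: the variables are Uniform(0, 1), and all of them share one common threshold t = 1 - 1/n (so roughly one variable passes in expectation); this is not the paper's near-optimal threshold construction.

```python
# Illustrative sketch: non-adaptive threshold testing on Uniform(0,1)
# variables with a single common threshold (an assumption, not the
# paper's construction).
import numpy as np

rng = np.random.default_rng(0)

def trial(n, t):
    x = rng.random(n)          # n i.i.d. Uniform(0,1) values
    passed = x >= t            # one binary test per variable
    if passed.any():
        # Every passing variable has the same conditional expectation
        # E[X | X >= t] = (1 + t) / 2, so picking any passer (here the
        # first) maximizes the conditional expectation.
        i = int(np.argmax(passed))
    else:
        # No variable passed: E[X | X < t] = t / 2 for all of them,
        # so any choice is equally good.
        i = 0
    return x[i], x.max()       # algorithm's value vs. prophet's value

n, trials = 20, 200_000
t = 1 - 1 / n                  # common threshold: ~one passer in expectation
picked, best = np.array([trial(n, t) for _ in range(trials)]).T
print(f"E[picked] / E[max] ~ {picked.mean() / best.mean():.3f}")
```

With a common threshold all passers are interchangeable; an adaptive strategy instead adjusts later thresholds based on the feedback already observed, which is what lifts the guarantee from the 0.745 + o(1) regime toward 0.869 - o(1) in the paper.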