Planning and acting in partially observable stochastic domains
about
When to stop managing or surveying cryptic threatened species
Multi-Agent Patrolling under Uncertainty and Threats
Decentralized sensor fusion for Ubiquitous Networking Robotics in Urban Areas
The COACH prompting system to assist older adults with dementia through handwashing: an efficacy study
From data to optimal decision making: a data-driven, probabilistic machine learning approach to decision support for patients with sepsis
The influence of Markov decision process structure on the possible strategic use of working memory and episodic memory.
Biological and artificial cognition: what can we learn about mechanisms by modelling physical cognition problems using artificial intelligence planning techniques?
Dynamics of Weeds in the Soil Seed Bank: A Hidden Markov Model to Estimate Life History Traits from Standing Plant Time Series
Structure learning in human sequential decision-making
Efficient use of information in adaptive management with an application to managing recreation near golden eagle nesting sites.
Which states matter? An application of an intelligent discretization method to solve a continuous POMDP in conservation biology.
Minimum time search in uncertain dynamic domains with complex sensorial platforms
Reward optimization in the primate brain: a probabilistic model of decision making under uncertainty.
Sampling-based real-time motion planning under state uncertainty for autonomous micro-aerial vehicles in GPS-denied environments.
Decision making under uncertainty: a quasimetric approach.
Informing sequential clinical decision-making through reinforcement learning: an empirical study.
Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange
Prospective Optimization with Limited Resources.
Parallel Representation of Value-Based and Finite State-Based Strategies in the Ventral and Dorsal Striatum.
Decoding the view expectation during learned maze navigation from human fronto-parietal network.
Predicting explorative motor learning using decision-making and motor noise
Toward Self-Referential Autonomous Learning of Object and Situation Models.
Reinforcement learning, conditioning, and the brain: Successes and challenges.
A computational framework for the study of confidence in humans and animals.
The algorithmic anatomy of model-based evaluation.
Self-Directed Learning: A Cognitive and Computational Perspective.
Frequencies of decision making and monitoring in adaptive resource management.
Reward-based training of recurrent neural networks for cognitive and value-based tasks
Active inference and epistemic value.
A systematic review and checklist presenting the main challenges for health economic modeling in personalized medicine: towards implementing patient-level models.
Active inference and agency: optimal control without cost functions.
An integrated testbed for cooperative perception with heterogeneous mobile and static sensors.
Representation and timing in theories of the dopamine system.
Computational models of planning.
Optimal Behavior is Easier to Learn than the Truth.
Faster Teaching via POMDP Planning.
Evolving autonomous learning in cognitive networks.
Recommendation System for Adaptive Learning.
Heuristic and optimal policy computations in the human brain during sequential decision-making.
Physiological and behavioral signatures of reflective exploratory choice.
P2860 (cites work)
Planning and acting in partially observable stochastic domains
description
scientific article published in May 1998
@de
scholarly article
@nl
scientific article published in May 1998
@uk
name
Planning and acting in partially observable stochastic domains
@en
Planning and acting in partially observable stochastic domains
@nl
type
label
Planning and acting in partially observable stochastic domains
@en
Planning and acting in partially observable stochastic domains
@nl
prefLabel
Planning and acting in partially observable stochastic domains
@en
Planning and acting in partially observable stochastic domains
@nl
P1476 (title)
Planning and acting in partially observable stochastic domains
@en
P2093 (author name string)
Anthony R. Cassandra
P304 (page(s))
P356 (DOI)
10.1016/S0004-3702(98)00023-X
P407 (language of work or name)
P577 (publication date)
1998-05-01T00:00:00Z
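The bare P-identifiers in this record follow the standard Wikidata property vocabulary. As a minimal sketch for interpreting such a record (the lookup table and helper function are illustrative assumptions, not part of any record schema), each ID used above can be mapped to its English label:

```python
# Labels for the Wikidata property IDs appearing in this record.
# The ID-to-label pairs are standard Wikidata vocabulary; the helper
# function itself is a hypothetical convenience, not an official API.
WIKIDATA_PROPERTY_LABELS = {
    "P1476": "title",
    "P2093": "author name string",
    "P304": "page(s)",
    "P356": "DOI",
    "P407": "language of work or name",
    "P577": "publication date",
    "P2860": "cites work",
}

def describe_property(pid: str) -> str:
    """Return a human-readable label for a Wikidata property ID."""
    return WIKIDATA_PROPERTY_LABELS.get(pid, f"unknown property {pid}")
```

With this mapping, `describe_property("P356")` reads the DOI line above as the article's DOI, and `describe_property("P577")` identifies `1998-05-01T00:00:00Z` as the publication date.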