Seeing the same thing differently: mechanisms that contribute to assessor differences in directly-observed performance assessments.
about
Vertically integrated medical education and the readiness for practice of graduates.
Supervised learning events in the foundation programme: a UK-wide narrative interview study.
"On the same page"? The effect of GP examiner feedback on differences in rating severity in clinical assessments: a pre/post intervention study.
Exploration of a possible relationship between examiner stringency and personality factors in clinical assessments: a pilot study.
Toward competency-based curriculum: Application of workplace-based assessment tools in the National Saudi Arabian Anesthesia Training Program.
Exploring the role of first impressions in rater-based assessments.
Realizing the promise and importance of performance-based assessment.
The "zing factor" - how do faculty describe the best pediatrics residents?
Seeing the 'black box' differently: assessor cognition from three research perspectives.
On the Assessment of Paramedic Competence: A Narrative Review with Practice Implications.
Rater cognition: review and integration of research findings.
A BEME (Best Evidence in Medical Education) review of the use of workplace-based assessment in identifying and remediating underperformance among postgraduate medical trainees: BEME Guide No. 43.
The effect of rater training on scoring performance and scale-specific expertise amongst occupational therapists participating in a multicentre study: a single-group pre-post-test study.
Cracking the code: residents' interpretations of written assessment comments.
Selecting and Simplifying: Rater Performance and Behavior When Considering Multiple Competencies.
Expectations, observations, and the cognitive processes that bind them: expert assessment of examinee performance.
How faculty members experience workplace-based assessment rater training: a qualitative study.
Effects of a rater training on rating accuracy in a physical examination skills assessment.
Guidelines: The do's, don'ts and don't knows of direct observation of clinical skills in medical education.
How do trained raters take context factors into account when assessing GP trainee communication performance? An exploratory, qualitative study.
Online examiner calibration across specialties.
Emotions and assessment: considerations for rater-based judgements of entrustment.
Describing student performance: a comparison among clinical preceptors across cultural contexts.
Grades in formative workplace-based assessment: a study of what works for whom and why.
Supervisor-trainee continuity and the quality of work-based assessments.
Inter-rater variability as mutual disagreement: identifying raters' divergent points of view.
Exploring examiner judgement of professional competence in rater based assessment.
A contemporary approach to validity arguments: a practical guide to Kane's framework.
Impact of rating demands on rater-based assessments of clinical competence.
How we give personalised audio feedback after summative OSCEs.
Relatively speaking: contrast effects influence assessors' scores and narrative feedback.
Use of Key Performance Indicators to Improve Milestone Assessment in Semi-Annual Clinical Competency Committee Meetings.
When to trust our learners? Clinical teachers' perceptions of decision variables in the entrustment process.
Training for Failure: A Simulation Program for Emergency Medicine Residents to Improve Communication Skills in Service Recovery
P2860
P2860
Seeing the same thing differently: mechanisms that contribute to assessor differences in directly-observed performance assessments.
description
2012 academic article
@en
name
Seeing the same thing differently: mechanisms that contribute to assessor differences in directly-observed performance assessments.
@en
Seeing the same thing differently: mechanisms that contribute to assessor differences in directly-observed performance assessments.
@nl
type
label
Seeing the same thing differently: mechanisms that contribute to assessor differences in directly-observed performance assessments.
@en
Seeing the same thing differently: mechanisms that contribute to assessor differences in directly-observed performance assessments.
@nl
prefLabel
Seeing the same thing differently: mechanisms that contribute to assessor differences in directly-observed performance assessments.
@en
Seeing the same thing differently: mechanisms that contribute to assessor differences in directly-observed performance assessments.
@nl
P2093
P2860
P1476
Seeing the same thing differently: mechanisms that contribute to assessor differences in directly-observed performance assessments
@en
P2093
Karen Mann
Paul O'Neill
P2860
P2888
P304
P356
10.1007/S10459-012-9372-1
P50
P577
2012-05-12T00:00:00Z