In this paper, we study the problem of one-shot skeleton-based action recognition, which poses unique challenges in learning transferable representations from base classes to novel classes, particularly for fine-grained actions. Existing meta-learning frameworks typically rely on body-level representations in the spatial dimension, which limits their generalisation ability to capture subtle visual differences in the fine-grained label space. To overcome this limitation, we propose a part-aware prototypical representation for one-shot skeleton-based action recognition. Our method captures skeleton motion patterns at two distinct spatial levels: one encoding global contexts among all body joints, referred to as the body level, and the other attending to local spatial regions of body parts, referred to as the part level. We also devise a class-agnostic attention mechanism to highlight the parts most important for each action class. Specifically, we develop a part-aware prototypical graph network consisting of three modules: a cascaded embedding module for our dual-level modelling, an attention-based part fusion module that fuses parts and generates part-aware prototypes, and a matching module that performs classification with the part-aware representations. We demonstrate the effectiveness of our method on two public skeleton-based action recognition datasets: NTU RGB+D 120 and NW-UCLA.
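To make the fusion and matching steps concrete, below is a minimal sketch of how class-agnostic attention over part-level embeddings could produce part-aware prototypes for one-shot matching. This is an illustrative assumption, not the authors' implementation: the module names, tensor shapes, and the MLP-based part scorer are all hypothetical, and the cascaded embedding module that would produce the part embeddings is omitted.

```python
# Hedged sketch: attention-based part fusion + prototype matching.
# Assumes part-level embeddings of shape (batch, num_parts, dim) have
# already been produced by an upstream embedding module (not shown).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartFusion(nn.Module):
    """Class-agnostic attention that weights body parts before fusion."""
    def __init__(self, dim: int):
        super().__init__()
        # A single scorer shared across all classes, hence "class-agnostic".
        self.scorer = nn.Sequential(
            nn.Linear(dim, dim // 2),
            nn.ReLU(),
            nn.Linear(dim // 2, 1),
        )

    def forward(self, parts: torch.Tensor) -> torch.Tensor:
        # parts: (batch, num_parts, dim) part-level embeddings.
        weights = F.softmax(self.scorer(parts), dim=1)  # (batch, num_parts, 1)
        return (weights * parts).sum(dim=1)             # fused: (batch, dim)

def one_shot_classify(support_parts, query_parts, fusion):
    # support_parts: (num_classes, num_parts, dim) — one exemplar per class.
    # query_parts:   (num_query, num_parts, dim).
    prototypes = fusion(support_parts)        # part-aware prototypes
    queries = fusion(query_parts)             # (num_query, dim)
    # Matching module analogue: assign each query to its nearest prototype
    # in the embedding space (prototypical-network-style matching).
    logits = -torch.cdist(queries, prototypes)  # (num_query, num_classes)
    return logits.argmax(dim=1)
```

In this sketch the attention weights are computed from the part embeddings alone, so the same scorer highlights informative parts for every class, mirroring the class-agnostic design described above; the body-level branch of the dual-level model would be fused with these part-aware features in the full method.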