The need for valid, high-quality feedback on teaching is more pressing than ever, as the COVID-19 pandemic continues to impact teaching and learning with no clear end in sight. The pandemic has not only likely decreased learning growth compared to pre-pandemic levels but has also likely exacerbated the achievement and opportunity gaps between vulnerable and disadvantaged students and their more privileged peers (Sonnemann & Goss, 2020). It has also drastically shifted ways of teaching and learning, with limited understanding of how this has affected the quality of teaching, particularly through the eyes of students. Student perception surveys are a powerful and valuable tool for providing and promoting student-led feedback on quality teaching, but there is a need to revisit and challenge some of the underlying claims and assumptions of these surveys to ensure their use is appropriate, relevant, and optimal in supporting quality teaching. This paper challenges a key assumption behind student perception surveys of teaching: that they identifiably measure multiple distinct and specific practices. Such surveys have become increasingly prominent in overseas teacher evaluation systems (Kane, Kerr, & Pianta, 2014; Geiger & Amrein-Beardsley, 2019), but they have also proliferated across the Australian educational landscape, often embedded within larger, school-based surveys that examine students’ attitudes to school, wellbeing, and engagement. Student perception surveys are becoming a more common feature in schools seeking effective sources of teacher feedback, with localised development of products such as the ACER Student Perception of Teaching Questionnaire and the Pivot Student Perception Survey instrument; according to the Pivot website, the latter has now been used in more than 75,000 classrooms, with over 1,000,000 surveys administered to date.
While there is ample research indicating that student perception surveys are a valuable tool for measuring quality teaching (Cantrell & Kane, 2013; Ferguson & Danielson, 2015; Kuhfeld, 2017; Wallace, Kelcey, & Ruzek, 2016), there is also a growing body of research suggesting that these surveys cannot actually distinguish the particular facets of teaching that they claim to (Kuhfeld, 2017; Wallace, Kelcey, & Ruzek, 2016). This paper presents findings from a study of the implementation of the Tripod student perception survey across 63 teachers’ classrooms with 1,056 of their students. The Tripod survey (Ferguson & Danielson, 2015; LaFee, 2014) was widely utilised in the Measures of Effective Teaching Project (Kane & Staiger, 2012), is well established, and has been adapted to the Australian context (the Tripod was, for instance, the original basis for the design of the Pivot survey instrument). The paper will discuss how the seven-factor model of quality teaching that the Tripod survey purportedly measures could not be replicated; instead, a much simpler two-factor structure emerged. The paper will argue that student perception surveys should not be used to isolate and provide feedback on teacher practices that are better measured through other means. It offers some potential explanations of what such surveys actually measure, concluding by identifying and elaborating upon the policy, practice, and research implications for the future use of student perception surveys, particularly in Australian schools.