Reflecting upon where I have got to with my own flipped classroom seems a sensible culmination of my COETAIL experience. Since I first introduced the idea to my Year 12 Physics class, the IT integration has developed. I am now producing increasingly considered vodcasts which reflect my improved understanding of both the tools and the student requirements. Furthermore, the increased teaching time has allowed the continued integration of further IT tools into the teaching and learning process. As my students have now got used to the flipped classroom techniques, they are in a position to reflect honestly on their experiences and provide a data-driven element to the discussion.
People fear close analysis of their students' results because of concerns about what it could reveal about them as teachers. This fear can stop analysis which could provide a fascinating insight into student learning. For this reason it is great that one group of teachers has allowed me to analyse the results of common assessment tasks for a whole year group, across a whole year of learning, using IBO MYP Science. The questions I intend to provide data-driven answers to are:
1) Is one type of science more difficult to succeed in than another?
2) Are all assessment tasks of a certain criterion of equal difficulty?
3) In middle years general science classes does a higher level specialist get better results?
I ask the first question because, in theory, five of the six criteria in the MYP's criterion-referenced system for science relate only to general science skills. Only one of the six criteria, Criterion C – Knowledge and Understanding, would provide an opportunity for any divergence between the sciences. An appreciation of this could help teachers provide alternative or extended teaching to address certain aspects of understanding and redress any imbalance.
The answer to the second question should help teachers identify which assessment tasks need to be reviewed to improve alignment.
I ask the final question because I want some data-driven evidence, and not the reactionary gut instinct that too often drives change, as to whether having specialists teach only their own units is beneficial, or whether the rewards of a skills-based generalist approach are greater.
Is one type of science more difficult to succeed in than another?
Hypothesis: Physics is the most difficult, followed by Chemistry and then Biology, based on the numbers of students who select these units for Diploma studies.
Method: Compare the overall average level for all assessments with those gained in each individual science.
Standard deviation = (Max – Min) / Number of samples (a simple range-based estimate of spread rather than the formal statistical definition)
For all assessment tasks this value is ±0.07
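The comparison described above can be sketched in a few lines of Python. The average levels below are hypothetical, purely to illustrate the method; the real figures came from the year group's common assessment tasks, and `range_spread` implements the simple spread estimate used here.

```python
# A minimal sketch of the subject comparison. Task averages are hypothetical
# (illustrative only, not the school's actual data).
def range_spread(values):
    """The spread estimate used in this post: (max - min) / number of samples."""
    return (max(values) - min(values)) / len(values)

# Hypothetical per-task average levels, grouped by subject.
task_averages = {
    "Biology":   [4.6, 4.2, 4.8, 4.5],
    "Chemistry": [4.1, 4.3, 4.0, 4.2],
    "Physics":   [4.9, 4.7, 5.0, 4.8],
}

all_tasks = [level for levels in task_averages.values() for level in levels]
overall_average = sum(all_tasks) / len(all_tasks)
spread = range_spread(all_tasks)

for subject, levels in task_averages.items():
    subject_average = sum(levels) / len(levels)
    flag = "outside" if abs(subject_average - overall_average) > spread else "within"
    print(f"{subject}: {subject_average:.2f} ({flag} ±{spread:.2f} of {overall_average:.2f})")
```

Any subject whose average sits further from the overall average than the spread value is flagged, which is how the divergent subjects were identified.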
So the subject areas, from easiest to most difficult, are Physics, Biology and finally Chemistry. These results stand surprisingly against popular belief and against the numbers reflected in IB Diploma selection. Both Physics and Chemistry stand outside the standard deviation of the average, indicating changes need to be made.
Evaluation of the assessment tasks is obviously ongoing, but it should be noted that the assessment task with the lowest average value was related to biology and has already been dramatically modified to reflect the teachers' opinions. This should bring up both the biology average and the overall average. There is also continued work in developing the chemistry assessment requirements.
Are all assessment tasks of a certain criterion, but in different units, of equal difficulty?
Hypothesis: The collective teaching experience would identify assessment tasks which were inconsistent with the criterion requirements, causing students to under- or over-perform by more than one level.
Method: Compare the overall average level for all pupils of each criterion (and within that each subject) to ensure that all the assessment tasks lie within one level.
The maximum and minimum levels all lie within 0.5 of a level of the criteria average.
The agreement between assessment tasks within each criterion is within an acceptable range.
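The consistency check above amounts to flagging any task whose average sits more than half a level from its criterion average. A minimal sketch, with hypothetical per-task averages standing in for the real data:

```python
# Hypothetical task averages per criterion (illustrative only).
criterion_task_averages = {
    "Criterion C": [4.4, 4.7, 4.3, 4.6],
    "Criterion D": [3.9, 4.1, 4.2, 4.0],
}

for criterion, levels in criterion_task_averages.items():
    average = sum(levels) / len(levels)
    # Flag any task more than half a level away from the criterion average.
    outliers = [lvl for lvl in levels if abs(lvl - average) > 0.5]
    status = "review needed" if outliers else "acceptable agreement"
    print(f"{criterion}: average {average:.2f}, {status}")
```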
In middle years general science classes does a higher level specialist get better results?
Hypothesis: The criterion-referenced method of assessment is transparent and skills-based, and therefore results will not be aligned with a teacher's specialist area.
Method: Compare the change between the overall average of students for each teacher across all criteria and the averages for each science topic taught.
| Specialist | Overall Average | Std. Deviation | Biology Units | Chemistry Units | Physics Units | Average Change |
|---|---|---|---|---|---|---|
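The table's "change" figures can be computed as each subject's unit average minus that teacher's overall class average. A sketch of the calculation, with hypothetical class averages in place of the real data:

```python
# Hypothetical class averages per teacher (illustrative only).
teachers = {
    "Biology specialist":   {"overall": 4.5, "Biology": 4.6, "Chemistry": 4.2, "Physics": 4.6},
    "Chemistry specialist": {"overall": 4.3, "Biology": 4.4, "Chemistry": 4.1, "Physics": 4.5},
    "Physics specialist":   {"overall": 4.4, "Biology": 4.4, "Chemistry": 4.0, "Physics": 4.7},
}

for name, levels in teachers.items():
    # Positive change: the class did better in that subject than its overall average.
    changes = {subject: levels[subject] - levels["overall"]
               for subject in ("Biology", "Chemistry", "Physics")}
    print(name, {subject: round(change, 2) for subject, change in changes.items()})
```

If specialism mattered, each teacher's largest positive change would fall in their own subject; the analysis below tests exactly that.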
Analysis: The only specialist whose classes improved beyond their overall class average's standard deviation was the Physics specialist, yet this was not the largest change for that subject, further indicating a lack of alignment between specialism and class performance. The greatest change occurred in the chemistry units, which had already been identified as the most difficult to succeed in, further confirming that point.
As predicted in the hypothesis, the criterion-referenced assessment system means that a specialist teacher is not shown to have a significant impact on the performance of students in that class. Considering the spoken opinions of less able students against Physics, it is interesting that the class with the lowest initial average had the greatest boost whilst studying that subject.
- The six units studied in the year can be identified as two of each science: chemistry, biology and physics. Pairing these up ensures that all assessment tasks are reflected for each subject (considering units individually does not allow this comparison)
- Each of the three teachers had a defined specialism, and this was reflected in the units they organised
- The units were not all taught in parallel, randomising, and so mitigating, the effect of students learning from their experiences and improving throughout the year
- As this sample is taken from further through the International Baccalaureate Middle Years Programme, it is hoped that the core skills will already have been developed