Speaking walls


You may have heard the expression “the walls have ears”, referring to the secrets they would overhear. At AISB, although our classroom walls do not have ears, they can certainly speak: they speak the language of the Middle Years Programme, and to all our students they go on about teaching and learning.
In the corridors we see the celebrations of success: the honor roll, the learner profile, our mission statement and service opportunities. Yet it is when you get into the classrooms that you will really hear the walls call out to our students about the units being taught, the command terms used and the key words needed, and share what has been happening.






So whenever you visit please take a moment to listen to what our walls have to say.
Special thanks to the following teachers for their excellent wall displays: Mr. O’Brien from the Science department, Ms. Parnell from the English Language and Literature department, Mr. Pryslak from the Math department and Ms. Nedelcu from the Language Acquisition department.

Should success and failure in MYP always be the same?


In each subject in the MYP, students gain a final grade based on their criterion levels total (the sum of the levels awarded in each of the criteria).  This total is matched against a set of grade boundaries to identify a final grade from 1 to 7.   Each of these grades has a related descriptor, such as:

  • Grade 1 – Minimal achievement in terms of the objectives
  • Grade 7 – A consistent and thorough understanding of the required knowledge and skills, and the ability to apply them almost faultlessly in a wide variety of situations.  Consistent evidence of analysis, synthesis and evaluation is shown where appropriate.  The student consistently demonstrates originality and insight and always produces work of high quality.
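The boundary lookup described above can be sketched in a few lines of Python. Note that the boundary values below are invented purely for illustration – actual MYP grade boundaries are published by the IB and vary by subject.

```python
# Sketch of how an MYP final grade is derived: sum the awarded criterion
# levels, then match the total against a set of grade boundaries.
# NOTE: these boundary values are hypothetical, for illustration only.

HYPOTHETICAL_BOUNDARIES = [
    (1, 0),   # grade 1 starts at a criterion levels total of 0
    (2, 6),
    (3, 10),
    (4, 15),
    (5, 19),
    (6, 24),
    (7, 28),
]

def final_grade(criterion_levels):
    """Sum the criterion levels and look the total up in the boundaries."""
    total = sum(criterion_levels)
    grade = 1
    for g, lower in HYPOTHETICAL_BOUNDARIES:
        if total >= lower:
            grade = g
    return grade

print(final_grade([5, 6, 4, 5]))  # total 20 -> grade 5 under these boundaries
```

Notice that nothing in the lookup itself names a failing grade – which is exactly the gap discussed below: any line between pass and fail is drawn by the school, not by the scheme.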


At no point in the Holy Grail of the MYP – the MYP: From principles into practice document – is failure mentioned.  Note that in the coordinator’s handbook schools can fail, but seemingly students cannot.  Now, by the way, I am fine with all of this.  I consider everyone to be on their own personal continuum of understanding, and appreciate that the MYP reflects this.


However, many schools (all the ones I have taught in, at least) still feel that they need to identify what it means to fail (although some schools may disguise this as a “cause for concern”).  In later years these MYP grades are incorporated onto a university-friendly transcript, which certainly does identify what it means to fail.   This is where the inconsistency comes in: at my present school a grade 2 is considered a fail, yet at my previous school failure was considered to be a grade 3.


Something similar happens at the other end of the academic achievement spectrum, as reflected by being placed on the honor roll.  At my last school MYP success meant a criterion levels total of 52 or higher, while presently a 44 earns the same accolade.


If I just based my school evaluation on final student achievement grades, my last school would sit higher up the ladder.  Now, I moved to this school partly because I was so excited by its growth potential and the ambition to move higher up the academic achievement ladder.   Yet all this does lead me to ask some questions:

  • Should the standards a school sets reflect the school as it is, or the aspirations of the school?
  • Would raising the standards help nudge the students in an upward achievement direction due to increased expectation or bring about an acceptance of failure?

Any thoughts on this topic would be greatly welcomed.

The issues of academic honesty and the class test




In theory a test can be an excellent assessment task, devoid of all academic honesty issues.  We like to think of it as a pure moment: a student walks into a class, answers the questions as best they can and then walks away, leaving a true insight into their ability to use the tools they have learnt.  Furthermore, a test can act as an excellent piece of formative assessment if the students are given the opportunity for a supported review.  With these honest aspirations in mind, I want to consider some of the issues relating to academic honesty which need to be addressed for this to be a truly fair assessment tool.

Please note that for this blog post I am considering a test which a student might sit at the end of a unit.  I am purposely avoiding the issues related to nationalized testing – such as exam style bias, or manipulation by teachers due to a range of outside pressures.  I am also not advocating this as the only suitable assessment task – I think it is one of many assessment tools which I accept some students perform better with than others.

So, with my caveat out of the way, here is my initial list of issues that need to be considered by both the teacher and the school as an institution:


Issue: Institutional test history
When used as a formative tool, a test is given to the student community and becomes a reference for future generations of students completing that unit.
Solutions:
1) As teachers we need to revisit the test we set each time and make adjustments
2) Only use it as a summative tool – which to me is an educational failing

Issue: Different test times
When timetabling causes tests to be held at different times, students who take the test first can provide an insight to those taking it later.
Solutions:
1) Set the timetable with an opportunity to have at least one shared period (this might put another subject out once in a while, but what goes around should come around for them also)
2) Set a different test for every class – which to me would not help teachers with their work-life balance

Issue: “Oh no, I have missed the test due to…”
A student misses the test and completes it later, hence benefitting from the shared insight of other students.
Solutions:
1) Set one catch-up test which has been suitably adjusted, and inform all students that this is the only other opportunity to complete this assessment
2) Provide other chances to test students during the year

Issue: Crib sheets
Secret additional notes brought into the test.
Solutions:
1) Set a test which is based on application of knowledge and not recollection, e.g. give them all the formulas they need
2) Apply consistent rules to students caught cheating in this manner

Issue: The useful toilet break
A well-placed toilet break provides an opportunity for a student to use their phone to access a world of information.
Solution: A school rule that students cannot go to the bathroom until they have finished the test, unless they are accompanied


I mentioned earlier this was my initial list because I am hoping this blog post will provide an opportunity for further issues and solutions to be voiced.  So please do add comments.

Protecting the scientific investigation from plagiarism


Why a scientific investigation is important

A scientific investigation should provide an opportunity for students to work through the scientific method – the general guide to the development of scientific thinking which is used from elementary school experiments to published scientific papers.  This continuity helps students recognise that every associated write-up provides the opportunity to put to use what they have learnt before and to experiment with improvements that help their own development.


The right investigation

I genuinely believe in that last statement, but feel that too often students are set tasks that are not open-ended enough for them to be empowered by individual ownership, due to repetition within the class or, worse, around the world with those standard experiments (for example the classic resistance-of-a-wire experiment).  One reason for this is that teachers are afraid to set tasks that they do not completely understand, so as to protect their position as the revered holder of all knowledge, rather than acknowledging their shared role as lifelong learners.

My favourite recent scientific investigation, for a year 11 class, involved researching the factors that affect the function of a simple voltaic cell.  The features which make this a suitable task are:

1) The potential experiments are relatively easy to set up and take measurements from

2) With several different electrode types and many electrolyte options it becomes easy for every student in the class to have their own individual investigation

3) I could not predict all the results found so felt that I became an involved learner with my students.

4) Even when some of my more able students dug up the Nernst equation, it became obvious that the ranges of concentration were not specified, which often led to some delightfully (more for me than for the students) contradictory results.

5) Some investigations can lead to null results, where values do not change.  This provides a great reminder to students that sometimes variables are not dependent on one another (something which is seemingly trained out of students by teachers with the insecurities mentioned earlier)

6) Null results also provide an excellent gateway into conversations about the precision and reliability of results, supporting the required leap into higher-level science (in my case, that set by the IB Diploma)

The right student support

Students need to recognise that every scientific investigation write-up should build upon the last.  Especially when there is access to a digital version in class, due to a one-to-one program, the previous investigation, including the teacher’s comments, makes a great starting point as it best highlights the student’s strengths and weaknesses.

For this to work a school should ensure that the expectations for a scientific investigation are consistent across all teachers throughout a year group.  I know every teacher has their own quirks when it comes to a write-up, so bringing a year group team together to discuss this is vital.  Furthermore, these pieces should fit together as a natural progression from year to year, which requires a department to consider all of them collectively.  For example, my own year 7 template introduces and scaffolds all the required sections, whilst the year 11 support material is no longer a template and is more detailed, but does represent the same core sections with language continuity.

Helping each student develop their own glossary of useful terms for each section of a scientific investigation write-up is also a useful tool.  For instance, sentences which correctly describe different graphical trends could be built up for the data analysis section.

The right teacher engagement

Academic honesty issues can be avoided if the teacher is constantly aware of what the student has produced at each stage – which is briefly explained below.

1) Aim, hypothesis and variables

Students should be provided with the equipment so they can familiarise themselves with the investigation, helping to spark a question from which to develop a one-sentence aim.   A one-paragraph hypothesis should explain what they expect to happen and why.  A table should show the type of each variable (independent, dependent and fixed) and how it is changed, measured or controlled.

2) Procedure and results table (completing this should be a hurdle students clear before starting the investigation)

The procedure describes the scientific process in clear, numbered, instructional steps.  A results table would normally include the independent variable in the first column and the dependent variable in the following columns, with the number of columns depending on the number of repetitions mentioned in the method.

3) Results and refined method

The results should be gathered and the method refined to reflect what the student actually does.

4) Data analysis (often including a graph)

An analysis of the results – often a graph and an explanation of what trends have been identified.

5) Conclusion

A consideration of the quality of the data collected, relating to its reliability and validity, before presenting a scientific explanation of the findings.  The validity, of course, should be related to whether an existing scientific explanation can be used to support the identified trends.

6) Evaluation

The identification of issues which arose during the investigation and the related improvements.

This is all made even easier when each stage is turned in and reviewed by the teacher, which should highlight the similarities between the earlier stages and the final piece of work.

The right student requirements

Producing an explicit timeline for the stages described above helps students manage their time more effectively and models the teacher’s expectations – so trying to eliminate the ill-conceived, student-developed plan of completing a scientific investigation write-up in one, often late-night and last-minute, attempt.

At best a hypothesis should reflect the student’s understanding at a point in time, which guides their predictions.  It does not have to be based on the latest related academic studies; rather it is a student’s articulation of prior knowledge, and therefore provides a teacher with tremendous insight.  Making this clear alleviates the convoluted hypothesis designed to best fit the conclusion’s scientific explanation.

It is expected that a conclusion should contain a scientific explanation, and that this should be based on what others have identified.  For this reason it is vital that students understand that they need to identify their sources using a school-wide consistent method, e.g. MLA referencing.


The rules to protect scientific investigations from academic honesty issues:

Rule 1: The teacher must design a suitably open-ended investigation with enough variety for all students to have individual ownership.

Rule 2: Every investigation should emphasize the embedded building blocks with which the student can improve their own skills, and grades.

Rule 3: The teacher must be aware of all stages of the investigation, both to make it easier to identify work which is not that of the student and to provide support at the relevant time.

Rule 4: We are all standing on the shoulders of giants, and identifying where the key scientific concepts in the conclusion come from is best done by including a bibliography (and, if possible, in-text referencing).

Comparing a year group's MYP Science results responsibly


People fear close analysis of their students’ results because of concerns about what it could reveal about them as teachers.  This fear can stop analysis which could provide a fascinating insight into student learning.   For this reason it is great that one group of teachers has allowed me to analyse the results of common assessment tasks for a whole year group, across a whole year of learning, using IBO MYP Science. The questions I intend to provide data-driven answers to are:

1)      Is one type of science more difficult to succeed in than another?

2)      Are all assessment tasks for a given criterion of equal difficulty?

3)      In middle years general science classes does a higher level specialist get better results?

I ask the first question because, in theory, five of the six criteria used by the MYP criterion-referenced system for science relate only to general science skills.  So only one of the six criteria, Criterion C – Knowledge and understanding, would provide an opportunity for any divergence.   An appreciation of this could help teachers provide alternative or extended teaching to reach certain aspects of understanding and redress any imbalance.

The answer to the second question should help teachers identify which assessment tasks need to be reviewed to improve alignment.

I ask the final question because I want some data-driven evidence, and not the reactionary gut instinct that too often drives change, as to whether having specialists teach only their own units is beneficial or whether the rewards of a skills-based generalist are greater.


Is one type of science more difficult to succeed in than another?

Hypothesis: Physics is the most difficult, followed by Chemistry and then Biology, based on the numbers of students who select these subjects for Diploma studies.

Method: Compare the overall average level for all assessments with the average gained in each science alone


Sample Average Level
All 4.29
Biology 4.29
Chemistry 4.17
Physics 4.39


Standard deviation ≈ (Max – Min) / Number of samples (a rough range-based estimate rather than a true standard deviation)

For all assessment tasks this value is ±0.07
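The figure above can be reproduced with a few lines of Python, using the subject averages from the table; note again that (max − min) / n is a crude range-based spread estimate, not a sample standard deviation.

```python
# Reproduce the range-based spread figure used above:
# (max - min) / number of samples, applied to the three subject averages,
# then flag subjects whose average deviates from the overall by more than it.

averages = {"Biology": 4.29, "Chemistry": 4.17, "Physics": 4.39}
overall = 4.29  # average level across all assessments

spread = (max(averages.values()) - min(averages.values())) / len(averages)
print(round(spread, 2))  # 0.07, matching the figure above

outliers = [subject for subject, level in averages.items()
            if abs(level - overall) > spread]
print(outliers)  # Chemistry and Physics fall outside the band
```

Running this flags Chemistry and Physics, while Biology sits exactly on the overall average.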


So the subject areas from easiest to most difficult are Physics, Biology and finally Chemistry.  These results stand surprisingly against popular belief and against the numbers reflected in IB Diploma selection.  Both Physics and Chemistry lie outside the standard deviation of the overall average, indicating changes need to be made.


Evaluation of the assessment tasks is obviously ongoing, but it should be noted that the assessment task with the lowest average value was related to biology and has already been dramatically modified to reflect the teachers’ opinions.  This should bring both the biology average and the overall average up.   There is also continued work in developing the chemistry assessment requirements.


Are all assessment tasks of a certain criterion, but in different units, of equal difficulty?

Hypothesis: The collective teaching experience would have identified assessment tasks which were inconsistent with the criterion requirements, causing students to under- or over-perform by more than one level.

Method: Compare the overall average level for all pupils for each criterion (and, within that, each subject) to check that all the assessment tasks lie within one level.


Criteria Overall Average Max Min Biology Chemistry Physics
A 4.08 4.29 3.87 4.17 3.86 4.11
B 4.19 4.56 3.47 4.31 3.88 4.64
C 3.95 4.21 3.55 3.55 4.13 3.86
D 4.04 4.22 3.88 3.93 3.88 4.17
E 3.85 4.22 3.22 3.59 3.86 4.11
F 5.40 5.69 5.13 5.61 5.30 5.29


The maximum and minimum task averages mostly lie within 0.5 of a level of the criterion average, although the minimums for criteria B and E fall a little further out.


The agreement between assessment tasks within each criterion is within an acceptable range.
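As a quick sanity check, a few lines of Python over the table’s values flag which criteria have a task average more than half a level from the criterion average:

```python
# For each criterion, check whether the maximum and minimum assessment-task
# averages lie within 0.5 of a level of the criterion average.

criteria = {
    # name: (overall average, max, min) -- values from the table above
    "A": (4.08, 4.29, 3.87),
    "B": (4.19, 4.56, 3.47),
    "C": (3.95, 4.21, 3.55),
    "D": (4.04, 4.22, 3.88),
    "E": (3.85, 4.22, 3.22),
    "F": (5.40, 5.69, 5.13),
}

outside = [name for name, (avg, mx, mn) in criteria.items()
           if max(mx - avg, avg - mn) > 0.5]
print(outside)  # the minimums for B and E fall just outside the band
```

This suggests criteria B and E contain the assessment tasks most worth reviewing first.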


In middle years general science classes does a higher level specialist get better results? 

Hypothesis: The criterion-referenced method of assessment is transparent and skills-based, and therefore results will not be aligned with a teacher’s specialist area.

Method: For each teacher, compare the overall average of their students across all criteria with the average in each science topic taught, and consider the change.


Specialist   Overall Average   Stan. Deviation   Biology Units       Chemistry Units     Physics Units       Average Change
                                                 Average   Change    Average   Change    Average   Change
Biology      4.33              0.07              4.40      +0.07     4.17      -0.23     4.28      -0.05     -0.07
Chemistry    3.94              0.15              3.90      -0.04     3.73      -0.17     4.28      +0.36     +0.15
Physics      4.42              0.08              4.40      +0.02     4.33      -0.09     4.53      +0.11     +0.04


Analysis: The only specialist whose students improved beyond the standard deviation of their overall class average in that specialist’s own subject was the Physics specialist, yet this was not the largest change for that subject – further indicating a lack of alignment between specialism and class performance.  The largest drops occurred in the chemistry units, which had already been identified as the most difficult to succeed in, further confirming that point.


As predicted in the hypothesis, under the criterion-referenced assessment system a specialist teacher is not shown to have a significant impact on the performance of students in their class.  Considering the spoken opinions of less able students against Physics, it is interesting that the class with the lowest initial average had the greatest boost whilst studying that subject.




Additional Notes:

  • The six units studied in the year can basically be identified as two of each science: chemistry, biology and physics.  Pairing these up ensures that all assessment tasks are reflected for each subject (considering them per unit does not allow this comparison)
  • Each of the 3 teachers had a defined specialism, and this was reflected in the units they organised
  • The units were not all taught in parallel, which randomizes, and so eliminates, the effect of students learning from their experiences and improving throughout the year
  • As this sample is taken from a year group further through the International Baccalaureate Middle Years Programme, it is hoped that the core skills will already have been developed