Data Collection Methods
Four data collection methods were chosen for my particular study. The purpose of my study was to determine if the use of word study activities in guided reading would increase overall reading achievement. In order to assess all components of word study and reading achievement, I chose to complete four separate pre- and post-tests.
The first pre- and post-test assessed my students' understanding of phonics. For this assessment, students were presented with the 26 letters of the alphabet in random order. Students were first asked to name each letter and then to produce its sound. Students were scored on each section (letters and sounds) separately, and they also received an overall phonics score.
The second pre- and post-test assessed my students' understanding of phonological awareness. This test included three sections: word awareness, rhyming, and syllables. For word awareness, students were provided snap cubes. After I modeled an example, students used the cubes to show how many words were in a sentence that I read aloud. The rhyming section included pictures that had been printed in color and laminated. Students were shown pictures of rhyming words and were first asked to identify whether or not the words rhymed; they were then shown a set of three objects and asked to identify the word in the set that did not rhyme. Lastly, students were asked to provide a word that rhymed with "bat." Students were graded on each section (word awareness, rhyming, syllables) and also received an overall phonological awareness score.
The third pre- and post-test assessed my students' understanding of phonemic awareness. This test included four sections: initial sounds, final sounds, blending, and segmenting. The initial and final sounds sections used visuals that had been printed in color and laminated. Students were presented with a picture, I provided the word, and students identified the initial or final sound. The blending section was completed orally: I provided the sounds of a word, such as /c/ /a/ /t/, and students blended them together to produce the word "cat." Lastly, the segmenting section also included visuals. Each word had a corresponding picture that was printed in color, laminated, and cut into parts. For example, the picture for the word "cheese" was cut into three parts. Students were given the three pieces and asked to identify the part of the word corresponding to each piece of the picture (/ch/ for the first piece, /ee/ for the second, and /z/ for the last). Students were graded on each section and also received an overall phonemic awareness score.
The last pre- and post-tests were completed through Fountas and Pinnell benchmarking to determine student reading levels. Assessment followed the Fountas and Pinnell assessment guide and used the Fountas and Pinnell texts and assessment forms. Students were benchmarked before and after implementation in order to show growth over the course of the study.
The slideshow to the left displays the three word study assessments (phonics, phonological awareness, and phonemic awareness) as well as the visuals that accompanied the assessments.
Reasoning for Data Collection Methods
I collected data on 23 of my 25 students. Each student received a phonics score, a phonemic awareness score, a phonological awareness score, and a reading level on both the pre- and post-tests. These assessments were chosen for my class based on a few factors. First, I wanted assessments that would provide data on each aspect of my research: word study and reading achievement. Second, the word study tests were chosen because they were short, easy to facilitate, and came with visuals for the students. These tests also provided practice problems for each section so that students could familiarize themselves with the skill before being scored. The tests were given with verbal and visual prompts. Giving three separate tests also allowed students to take breaks between tests if needed, instead of sitting through one long test that might have caused disengagement.
Fountas and Pinnell benchmarking was chosen as a way to collect data on overall reading achievement. It was the assessment already in use in my district, selected for its effectiveness in differentiating guided reading instruction. My instructional coach and reading specialist presented an article by Fountas and Pinnell that described the assessment's validity and reliability. Among the district's reasons for adopting the assessment were a high test-retest reliability coefficient of .97 (the consistency of student scores across multiple administrations) and a high convergent validity coefficient of .93-.94 between fiction and nonfiction texts (evidence that the test measures what it is intended to measure).
My instructional coach also explained that Fountas and Pinnell benchmarking assessments allow educators to collect detailed information about a student's reading accuracy, comprehension, fluency, and reasons for errors. By analyzing a student's performance, educators can identify specific strengths and needs for each student. In short, Fountas and Pinnell assessments are endorsed by my district and were selected for their effectiveness in supporting targeted reading instruction.
Both the Fountas and Pinnell benchmarking assessments and the three word study assessments were given individually. Some students completed all three word study tests in one sitting, while others took breaks or spread the assessments over two days; this discouraged students from guessing answers just to finish quickly.
Using Assessments to Influence Instruction
After giving the initial pre-tests, I was able to use the data to adjust my instruction to meet the needs of my students. After benchmarking my entire class, I restructured my four guided reading groups. Seeing that two of my groups were performing well below grade level, I changed my guided reading schedule: rather than seeing only one of these groups daily, I began seeing both of them daily. This allowed me to see more students during the week and provide my "Pre-A" and "A" level readers with more intervention in hopes of increasing their reading levels. This decision required altering my guided reading schedule. Previously, I had the same four guided reading groups but saw two to three groups per day for 20 minutes each. In the revised schedule, I changed the length of each rotation: I saw three groups daily for 15 minutes each, with a 15-minute "empty" block that I could use for running records or for assisting students in Daily 5 rotations.
Next, I met with my reading specialist to determine who would receive Leveled Literacy Intervention (LLI) services. LLI is a small-group intervention for struggling readers that supplements guided reading instruction. Using the data from students' reading levels as well as their word study scores, I chose eight students who would benefit most from LLI. These students received the intervention in addition to their guided reading groups.
I was also able to use the pre-test data to determine the specific lessons I would teach each group. When designing my lesson calendar for implementation, I used the data to determine the specific skills on which each group would focus.
For example, my lowest readers focused on more basic word study skills such as initial, medial, and final sounds, while my highest readers focused on advanced kindergarten and first grade skills, such as digraphs and letter blends. By looking at reading level, phonics scores, phonemic scores, and phonological awareness scores, I created differentiated lessons that best met the needs of each specific group.
I was also able to informally assess students throughout the study. To provide the best learning experience for my students, I had to adjust instruction based on their needs. On some days, a group had already mastered the word study focus for that day. When this happened, I had to decide on a new area of focus: either moving on to the next topic or providing a more advanced version of that day's skill. On other days, a group was not ready for that day's skill. When this happened, I adjusted the lessons to either review previous material or break the new topic into smaller chunks.
I also continued to restructure my guided reading groups based on student need. For example, one student had performed at a "Pre-A" level on her benchmark test, so I initially placed her in my "Pre-A" guided reading group. After a week, I realized that the skills we were covering in that group were far too easy for her; she was not actually reading at a "Pre-A" level, even though that was how she had performed when assessed. I then moved her to my "A" group. This happened with a few students: I had to consider each student's Fountas and Pinnell level, how they performed when assessed, and whether that performance reflected their true reading ability. Throughout the first few weeks of the study, I rearranged groups a few times in order to find the right "fit" for each student.