Data Analysis and Responsible Sharing

An educator or district leader analyzes data and responsibly and effectively communicates key findings to stakeholders.
Made by Digital Promise Marketplace and Research
This micro-credential is no longer available.

About this Micro-credential

Key Method

The educator or district leader analyzes collected data using appropriate method(s). The educator or district leader summarizes findings to share evidence-based recommendations with a broader audience.

Method Components

1. Clean and analyze the data.

Review the goal or research question of the pilot. Determine the data elements (i.e., metrics such as student attitudes, learning, or behaviors) necessary to judge the success of the pilot. Clean the data, identifying and removing inaccurate or inconsistent entries, and link the qualitative and quantitative data sources by individual student, teacher, and/or grade. Linking the data allows you to identify trends or, if a comparison group is present, effects that support evidence-based recommendations and conclusions about further use, purchase, and/or scaling of the implementation. Review the results of the analysis to identify key findings that answer the defined research question or show whether the pilot goal was met, so that you can provide evidence-based recommendations.
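The cleaning and linking steps above can be sketched in a few lines of pandas. This is a minimal illustration with made-up scores; the column names (`student_id`, `pre_score`, `post_score`) are hypothetical, not part of any required format.

```python
import pandas as pd

# Hypothetical pre/post assessment data; values and column names are illustrative.
pre = pd.DataFrame({
    "student_id": [101, 102, 103, 104],
    "pre_score": [55, 60, None, 70],     # student 103 has no valid pre score
})
post = pd.DataFrame({
    "student_id": [101, 102, 104, 105],  # student 105 has no pre record
    "post_score": [65, 58, 80, 90],
})

# Link the data sets on a common identifier, keeping only students present
# in both (an inner join), then drop any remaining incomplete records.
linked = pre.merge(post, on="student_id", how="inner").dropna()

# With linked data we can compute each student's change,
# not just the difference between group averages.
linked["change"] = linked["post_score"] - linked["pre_score"]
print(linked)
print("Average change:", linked["change"].mean())
```

Students who appear in only one data set (or have a missing score) fall out of the linked set, which is why it matters whether the remaining sample is large enough to support your claims.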

2. Consider the audience.

Determine which audience needs to understand the findings of the analysis. There could be multiple audiences, for example, educators and district leaders. Consider the level of detail that matters to each potential audience. Does each group of stakeholders need to know the same information? What information about how the edtech tool was implemented will the audience need in order to interpret the findings? For example, educators may want to understand the curricular changes, whereas district leaders may be more concerned with cost and reach.

3. Ensure your findings and recommendations are supported by your data.

Review your summarized findings and recommendations to ensure your claims are supported by the data you collected, and that limitations are clearly stated (see Reporting Research Findings in the Resources section for help thinking through how to report various types of data and findings appropriately and accurately). For example, if you report that reading comprehension improved for students who used a particular edtech tool, be sure to clarify what other factors might have contributed to this outcome.

Research & Resources

Supporting Research

  • Jessica Sidler Folsom, La’Tara Osborne-Lampkin, and Carolyn D. Herrington (2014). Using Administrative Data for Research: A Companion Guide to A Descriptive Analysis of the Principal Workforce in Florida Schools. Regional Educational Laboratory Southeast at Florida State University.
    This guide was developed based on research conducted in Florida schools to support the descriptive data analysis techniques described in this Micro-Credential.
  • Wilder Research (2009). Analyzing and Interpreting Data: Evaluation Resources from Wilder Research. Wilder Research.
    Wilder Research developed this resource, based on their research and expertise, to provide techniques for data analysis and interpretation. These techniques have been embedded throughout this Micro-Credential.
  • Strategic Data Project (2017). Toolkit for Effective Data Use. Center for Education Policy and Research at Harvard University.
    This toolkit, developed based on the research done by Harvard University's Strategic Data Project, guides educators and district leaders who collect and analyze education data.
  • Daniel T. Willingham (2012). When Can You Trust the Experts?: How to Tell Good Science from Bad in Education. Jossey-Bass.
    This book delves into the human error that often biases the presentation of findings and provides techniques to overcome these challenges to support evidence-based decision-making.


Submission Requirements

Submission Guidelines & Evaluation Criteria

To earn the micro-credential, you must receive a passing evaluation for Parts 1 and 3. You must receive a “Yes” for all components submitted in Part 2.

Part 1. Overview Questions

(300-word limit)

Review your pilot goal or research question. What data elements are necessary to determine the success of the pilot or answer the research question? Please complete Part 1 before moving to Part 2.

Describe the data elements you will include in your analysis, making sure to describe:

  • The pilot goal or research question
  • The data collection instruments being used (i.e., the available data that has been collected)
  • The types of data (student pre-post survey results about attitudes toward technology; student pre-post assessment scores; educator pre-post survey results about student engagement; educator focus group results about district readiness to pilot; etc.) that directly allow you to answer the question at hand

Data Analysis

  • Is your data ready for analysis? Before you analyze your data, it is important to clean it. Cleaning data allows you to remove data points that blur valid trend lines. For example, if you have pre-post student assessment data in one sheet, do all of the students have both pre- and post-assessment scores? Do you have a large enough sample to remove the students who only have one score? Similarly, if you conducted student or educator surveys and included questions that had a multiple choice option of “I don’t know,” “N/A,” or a similar answer, clear the cells so the values do not interfere with your analysis.
  • Additionally, can you link your data, i.e., connect relevant data sets from various data collection instruments? Try to connect the relevant data sets from different sources (such as pre-post assessment data, pre-post student surveys, etc.) using a common identifier (often the student’s unique identification number or first and last name). Linked data allow you to make stronger claims because you can calculate each participant’s change from pre to post and identify trends in those changes. If you are unable to link the data, be clear that the trend is based on the average pre score and the average post score across all participants, rather than the average change in score.
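Clearing non-answer survey cells, as described above, can be done by converting the response column to numbers and letting anything non-numeric become a missing value, which is then excluded from averages. The data and column names here are hypothetical.

```python
import pandas as pd

# Hypothetical survey responses on a 1-5 scale; "I don't know" and "N/A"
# are non-answers that would distort an average if coded as text or as a
# numeric placeholder such as 0.
survey = pd.DataFrame({
    "student_id": [101, 102, 103, 104],
    "engagement": ["4", "I don't know", "5", "N/A"],
})

# Clear the non-answer cells: coerce to numeric, turning anything
# non-numeric into NaN, which pandas excludes from counts and means.
survey["engagement"] = pd.to_numeric(survey["engagement"], errors="coerce")

print("Valid responses:", survey["engagement"].count())  # 2
print("Mean engagement:", survey["engagement"].mean())   # 4.5
```

Note that the mean is computed over the two valid responses only; reporting the number of valid responses alongside the mean keeps the claim honest about sample size.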

Part 2. Work Examples / Artifacts

To earn this micro-credential, please submit a document containing your data analysis and the materials you have prepared to share your findings:

Submit the report or presentation you intend to share or have shared with colleagues or stakeholders who will make a purchasing, implementation, or scale-based decision. Include your final, evidence-based recommendation.

  • What do the data say? Is there a change from pre to post in assessment scores, learning attitudes or skills, or reported engagement, etc. for the students? Is there a change in student skills, learning attitudes, levels of engagement, etc. as reported by teachers? Do the trends differ when data are disaggregated by subgroups, such as gender, race/ethnicity, usage category, grade, school level, teacher, etc.? Are there themes or patterns consistently mentioned in qualitative data such as open-ended survey questions or interviews?
  • Based on the data, what should you do? Did students benefit from the use of the program or product? Which students? Should all students use it? When the data are disaggregated, did some students seem to have a negative experience with the program or product? Did usage cause a change in score or attitude? Did teachers have difficulty in implementation? Should there be additional teacher training or support? What happens if you do not continue its use? If you do continue use, should you scale it differently?

Part 3. Reflection

(200-word limit)

After you have analyzed the data and created at least one visualization to communicate key findings, write a reflection that addresses the following:

  • Did this experience change the way you typically analyze data? If so, how?
  • Did this experience impact the way you intend to determine whether an edtech tool or program is being used successfully? Why or why not?
  • Were educators an audience you intended to share findings with? If so, did this change the way the district/educator leaders involved other educators?
  • What lessons can you take away from this PD experience that you will continue to use in future pilots and edtech procurement processes?
  • Are there ways you can use data analysis and communication strategies to negotiate licenses and prices with vendors?

    What happens when you apply Daniel Willingham’s “cheat” method for evaluating your evidence-based findings? This method pushes researchers to remove bias when reporting findings to better use evidence in making decisions.

  • Strip it: According to Willingham, the “strip it” method removes personal or emotional bias to better understand the results. He recommends constructing a sentence with the form, “If I do X, then there is a Y percent chance that Z will happen.” For example, “If 5th grade students use the program for an hour three times a week, there is a 34 percent chance that their ratio skill assessment scores will double.”
  • Flip it: Psychologists have repeatedly found that the description or presentation of information does affect the choices that are made. For example, Willingham compares the idea of being told the food you are eating is “75 percent fat free” versus “25 percent fat.” To avoid this bias when reporting your data, follow Willingham’s recommendation to “flip both.” When you report the data results, state, for example, “38 percent of the students who used this program have improved reading scores. In other words, 62 percent of the students who used this program did not have improved reading scores.”
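The “flip it” framing is simple arithmetic: report a result and its complement so that neither framing alone biases the reader. A minimal sketch, with an illustrative percentage:

```python
# Illustrative figure: percent of participating students with improved scores.
improved = 38
not_improved = 100 - improved  # the "flipped" complement

# Report both framings together, as Willingham recommends.
statement = (
    f"{improved} percent of the students who used this program have "
    f"improved reading scores. In other words, {not_improved} percent "
    f"of the students who used this program did not have improved scores."
)
print(statement)
```

Presenting both numbers side by side makes it harder for either an optimistic or a pessimistic framing to drive the decision on its own.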

Except where otherwise noted, this work is licensed under:
Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
