Validity concerns the accuracy and precision of the findings and, together with reliability, underpins reproducibility. When assessing validity and reliability, it is important to consider replication. According to Mackenzie et al. (2018), replication is essential for scientific research because it allows the same results to be obtained with different participants and in different circumstances. Replication is used to confirm the findings of the original study and to verify that the research can be repeated.
Froiland (2016) defines face validity as the extent to which a test appears, on its face, to measure the concept it is intended to evaluate; it reflects respondents' perception of the accuracy and usefulness of the instrument. To test face validity, individuals are asked to judge how valid the instrument appears to them, and raters may record these judgments on a Likert scale (Mackenzie et al., 2018). A face-validity expert examines the questions on a questionnaire to determine whether the test provides a useful indicator of the subject under examination. This type of validity concerns whether each measurement scale appears to match its conceptual domain.
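As a minimal illustrative sketch (not drawn from the cited studies), the snippet below aggregates hypothetical Likert-scale ratings from a panel of face-validity raters into a simple item-level summary; the rater scores, the 5-point scale, and the mean-rating threshold of 4 are all assumptions made for the example.

```python
# Minimal sketch: summarizing hypothetical face-validity ratings.
# Assumptions: five raters score each questionnaire item on a 1-5 Likert scale,
# and an item is treated as "face valid" here if its mean rating is at least 4.
# The data and the threshold are illustrative, not from the cited studies.

ratings = {
    "item_1": [5, 4, 4, 5, 4],
    "item_2": [3, 2, 4, 3, 3],
    "item_3": [5, 5, 4, 4, 5],
}

THRESHOLD = 4.0  # assumed cut-off for acceptable face validity

for item, scores in ratings.items():
    mean_score = sum(scores) / len(scores)
    verdict = "acceptable" if mean_score >= THRESHOLD else "needs revision"
    print(f"{item}: mean rating = {mean_score:.2f} ({verdict})")
```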
According to Mackenzie et al. (2018), predictive validity refers to the degree to which scores on a standardized test accurately predict a future criterion. Liang et al. (2018) show a correlation between college grade point averages and results from college admission exams, and studies of test bias are often framed in terms of differential predictive validity. In other words, predictive validity is the degree to which a test score correlates with the achievement the test was designed to predict; for example, high school students take the SAT to predict college performance (Ortiz et al., 2017). A correlation matrix linking the assessment scores and the criterion behavior is used to estimate predictive validity (Liang et al., 2018). The higher the predictive validity, the stronger the correlation between the assessment criteria and the desired behavior.
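To make the correlation step concrete, here is a minimal sketch that estimates a predictive validity coefficient as the Pearson correlation between an admission test score and a later criterion (college GPA). The scores are made-up illustrative data, not results from Liang et al. (2018) or Ortiz et al. (2017).

```python
# Minimal sketch: predictive validity as the Pearson correlation between
# an admission test score and a later criterion (college GPA).
# All values are hypothetical.
import math

test_scores = [1050, 1200, 1340, 980, 1420, 1110, 1260, 1500]  # hypothetical SAT scores
college_gpa = [2.8, 3.1, 3.5, 2.6, 3.7, 3.0, 3.3, 3.9]         # hypothetical first-year GPA

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(test_scores, college_gpa)
print(f"Predictive validity coefficient (r): {r:.2f}")  # closer to 1.0 = stronger prediction
```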
Concurrent validity refers to the degree of agreement between two different evaluations, one relatively new and the other already established and trusted (Leiker et al., 2016). Consider a class of nursing students whose competency is assessed with two final exams, one practical and one written. If students who score well on the written exam also score well on the practical exam, the exams demonstrate concurrent validity. In other words, concurrent validity compares the findings of a new test with those of an established test of the same kind to see whether they agree (Froiland & Worrell, 2016); if the results are similar, the two tests can be considered concurrently valid.
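Following the same logic, the sketch below checks concurrent validity by correlating scores on a new written exam with scores on an established practical exam taken at roughly the same time. The exam scores are hypothetical and are only meant to illustrate the comparison described above.

```python
# Minimal sketch: concurrent validity as the correlation between a new
# written exam and an established practical exam of the same competency.
# All scores are hypothetical illustrations.
from statistics import correlation  # Pearson's r (Python 3.10+)

practical_exam = [78, 85, 90, 62, 88, 74, 95, 81]  # established assessment
written_exam = [75, 82, 93, 65, 85, 70, 97, 79]    # new assessment of the same competency

r = correlation(practical_exam, written_exam)
print(f"Concurrent validity coefficient (r): {r:.2f}")
# A high positive r suggests the new written exam measures the same
# competency as the trusted practical exam, i.e. concurrent validity.
```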