
Scientific Method, Lab Report Format and Graphing



  1. Scientific Method, Lab Report Format and Graphing

  2. Observation/Asking Questions • Scientists identify problems to solve by observing the world around them

  3. Ask Questions • Information is collected from research and observations in an attempt to answer the questions

  4. Forming Hypothesis and Making Predictions • Hypothesis - statement that can be tested by observations or experimentation • Prediction - expected outcome of test • Based on hypothesis

  5. Setting up a controlled experiment • Use controlled experiment to test hypothesis • The manner in which a scientist conducts an experiment is called the experimental design. • A good experimental design ensures that the scientist is examining the contribution of a specific factor called the experimental (independent) variable to the observation. • The experimental variable is the factor being tested

  6. Record and Analyze Results • Record data • The data are the results of an experiment. • Should be observable and objective

  7. Analyze data • Data are analyzed using statistics • Measures of variation • Statistical significance • Probability value (p) • A p value less than 5% (p < 0.05) is generally accepted as statistically significant • The lower the p value, the greater the confidence that the results are not due to chance alone
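As a rough sketch of what this step can look like in practice, the snippet below computes measures of variation and a p value for two made-up groups; the data values, group names, and the choice of an independent two-sample t-test are assumptions for illustration (Python, using the standard statistics module and scipy).

```python
import statistics
from scipy import stats

# Hypothetical measurements for two groups (not real data)
control      = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8]
experimental = [4.8, 5.1, 4.7, 5.0, 4.9, 5.2]

# Measures of variation for each group
for name, group in [("control", control), ("experimental", experimental)]:
    print(f"{name}: mean = {statistics.mean(group):.2f}, "
          f"s = {statistics.stdev(group):.2f}")

# Statistical significance: p value from an independent two-sample t-test
t_stat, p_value = stats.ttest_ind(control, experimental)
print(f"p = {p_value:.4f}  ->  significant at the 5% level: {p_value < 0.05}")
```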

  8. Draw Conclusions • Use data to evaluate hypothesis • The data are interpreted to determine whether the hypothesis is supported or not. • If prediction happens, hypothesis is supported.

  9. Publish Results • Allows others to use information, repeat experiments to confirm validity of results, review experimental design • Findings are reported in scientific journals. • Other scientists then attempt to duplicate or dismiss the published findings.

  10. Repeating Investigations • Experimental results should be able to be reproduced because nature behaves in a consistent manner

  11. Scientific Theory • Scientific Theory: • Concepts that join together two or more well-supported and related hypotheses • Supported by broad range of observations, experiments, and data • Scientific Principle / Law: • No serious challenges to validity

  12. Controlled Experiments • Involve a Control group and an Experimental (test) group • Experimental (test) group - group that receives the experimental treatment, all other conditions kept the same except for the single condition being tested which is called the independent variable

  13. Controlled Experiments • Control group - all conditions kept the same, receives no experimental treatment, is the experimental trial without the independent variable, used to compare experimental group to, helps determine if changes are caused by independent variable, or some other factor

  14. Variables • Independent or manipulated variable - the variable the scientist deliberately changes in the experiment • Dependent or responding variable - the variable that is measured in the experiment; what happens because of the independent variable • Controlled variables (controls) - other factors that could cause changes in the dependent variable, so the scientist keeps them the same (constant) so they don't cause changes

  15. Controlled Experiments • Experiment should be repeated (replicates) or use a large sample size to verify results • Test independent variable at different values if possible • Try to eliminate or control all other factors

  16. Introduction to Hypothesis Testing Often after making an observation, you might propose some sort of tentative explanation for the phenomenon; this could be called your working hypothesis.

  17. Because absolute proof is not possible, statistical hypothesis testing focuses on trying to reject a null hypothesis. A null hypothesis is a statement explaining that the underlying factor or variable is independent of the observed phenomenon—there is no causal relationship. For example, an appropriate null hypothesis might be that the light does not affect plant growth, that there will be no difference between the experimental and control group.

  18. The alternative to the null hypothesis might be that there is a difference between the two groups (control-no light and experimental-light). Usually (but not always), an investigator is trying to find an alternative to the null hypothesis—evidence that supports the alternative hypothesis by rejecting the null (based on statistical tests).

  19. Examples of the Null Hypothesis A researcher may postulate a hypothesis: H1: Tomato plants exhibit a higher rate of growth when planted in compost rather than in soil. And a null hypothesis: H0: Tomato plants do not exhibit a higher rate of growth when planted in compost rather than in soil. It is important to carefully select the wording of the null hypothesis and ensure that it is as specific as possible. For example, the researcher might instead postulate the null hypothesis: H0: Tomato plants show no difference in growth rate when planted in compost rather than in soil.

  20. It is important to realize that hypothesis testing does not allow proof, or even acceptance, of the alternative to the null hypothesis. Typically, the decision comes down to whether there is enough evidence to reject the null hypothesis. If evidence to reject the null hypothesis is sufficient, what can be said is that the investigator rejects the null hypothesis—not that the investigation has proven the alternative hypothesis.
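A minimal sketch of this reject / fail-to-reject logic, reusing the tomato example with invented growth rates; the choice of an independent two-sample t-test and the numbers are assumptions for illustration.

```python
from scipy import stats

# Invented growth rates in cm/week; H0: no difference between soil and compost
soil    = [2.1, 2.4, 2.0, 2.3, 2.2, 2.5]
compost = [2.9, 3.1, 2.8, 3.0, 3.2, 2.7]

t_stat, p_value = stats.ttest_ind(soil, compost)

if p_value < 0.05:
    print(f"p = {p_value:.4f}: reject H0.")
else:
    print(f"p = {p_value:.4f}: fail to reject H0.")
# Even when H0 is rejected, this does not prove H1; it only says the observed
# difference would be unlikely if compost truly made no difference.
```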

  21. Positive vs. Negative control

  22. For scientists, positive controls are very helpful because they allow us to be sure that our experimental set-up is working properly. For example, suppose we want to test how well a new drug works and we have designed a laboratory test to do this. We test the drug and it works, but has it worked as well as it should? The only way to be sure is to compare it to another drug (the positive control) which we know works well. The positive control drug is also useful because it tells us our experimental equipment is working properly. If the new drug doesn't work, we can rule out a problem with our equipment by showing that the positive control drug works.

  23. The “negative control” sets what we sometimes call the “baseline”. Suppose we are testing a new drug to kill bacteria (an antibiotic) and to do this we are going to count the number of bacteria that are still alive in a test tube after we add the drug. We could set up an experiment with three tubes. One tube could contain the new drug we want to test. A second tube could contain a drug we already know kills the bacteria (a positive control). The last tube is our negative control – it contains a drug which we know has no effect on the bacteria. This tells us how many bacteria would be alive if we didn't kill any of them. If the new drug is working, there should be fewer cells left alive in the first tube compared to the last tube, and ideally the number of cells still alive (if any) should be the same in the first and second tubes. So “controls” are important to scientists because they help us validate the performance of our experimental set-up and tell us what effects we can reasonably expect to observe.
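A toy sketch of how the two kinds of control might be read, using invented colony counts (the numbers and tube layout are purely illustrative, not real data).

```python
# Invented colony counts for illustration only (not real data).
counts = {
    "new drug":         12,    # tube 1: drug being tested
    "known antibiotic": 10,    # tube 2: positive control (known to work)
    "no drug":          980,   # tube 3: negative control (baseline)
}

baseline = counts["no drug"]
for name, alive in counts.items():
    percent_killed = 100 * (baseline - alive) / baseline
    print(f"{name:>16}: {alive:4d} colonies alive, "
          f"{percent_killed:5.1f}% killed vs. baseline")

# The positive control confirms the set-up can detect killing at all;
# the negative control defines how many bacteria survive with no treatment.
```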

  24. Lab Report Format

  25. Before experiment • I. Purpose: What is the purpose of the experiment? Why are we doing the experiment? Background information and research needed to help understand or design the experiment, and the reasoning leading to the hypothesis (theory) • II. Materials: List of all materials and equipment needed to perform the experiment

  26. III. Procedure: Detailed step-by-step instructions of exactly what you plan to do. (Can someone else use your instructions to repeat the experiment?) • Include a diagram of the experimental setup • Specifically discuss variables • Independent – how it will be manipulated; the differing levels/amounts/concentrations to be administered • Dependent – how it will be measured: the tool or instrument to be used, units, frequency of measurements; if not a common method of collecting data, a picture or diagram illustrating how data is to be collected • Controlled variables – specifically how they will be regulated/controlled if not already done • Safety precautions/equipment required

  27. IV. Data tables: Blank table to record data. Prepare before the experiment. Think about what you will measure, how you will measure it, how long you will measure it, how frequently you will take measurements, and what instruments you will use to make the measurements. Include units for the data and uncertainties of the data (15-20 measurements)

  28. During experiment Collect and record raw data (what you measured) accurately and neatly into organized data tables

  29. Data Collection and Processing - uncertainties • For most measuring devices, uncertainty is half the place value of the last measured value; ex. 25.5 ºC (± 0.5 ºC) • Rulers have an uncertainty of ± 1 of the smallest division; ex. 3.1 cm (± 0.1 cm) • For electronic instruments the value is ± 1 unit of the last decimal place; ex. 13.7 g (± 0.1 g)

  30. Data Processing • Show and perform necessary calculations: calculate means, standard deviations, rates; standardize measurements (divide by volume or surface area to make them equivalent) • Include units, significant figures
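A minimal sketch of this processing step, assuming made-up oxygen-production data and leaf areas; it standardizes by surface area and then reports a mean and sample standard deviation using Python's built-in statistics module.

```python
import statistics

# Hypothetical raw data: oxygen produced (mL) by three plant samples
oxygen_ml     = [4.2, 3.8, 4.5]
leaf_area_cm2 = [10.0, 8.5, 11.2]   # surface area of each sample

# Standardize: divide by surface area so samples are comparable (mL per cm^2)
rate_per_cm2 = [o / a for o, a in zip(oxygen_ml, leaf_area_cm2)]

mean_rate = statistics.mean(rate_per_cm2)
sd_rate   = statistics.stdev(rate_per_cm2)   # sample standard deviation

print(f"mean = {mean_rate:.3f} mL/cm^2, s = {sd_rate:.3f} mL/cm^2")
```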

  31. After experiment • Graphs and Charts: graph the data or place it in charts to give a visual representation of the data. This will help to analyze the data. Choose the correct type of graph to show the data; does the graph show the data the way that you want it to?

  32. Conclusion: Summarize results of experiment (what happened?). Analyze results (why it happened?) • Analyze data and draw conclusions from results based on reasonable interpretation of data, referring to data when possible • Explain/justify experimental results

  33. Evaluating Procedures and Results • Evaluate weaknesses and limitations of the design of the investigation and the performance of your procedure • Is the data reliable, or did these weaknesses and limitations impact your data? • Small sample size, important variables not controlled, data not recorded accurately/reliably • Identify and discuss significant errors, and how they affected results

  34. Suggesting improvements • Suggest realistic improvements to the identified weaknesses and limitations; these should focus on specific pieces of equipment or techniques used

  35. Error Analysis • Human error • Measurements taken inaccurately or inconsistently • Systematic errors • Affect the data by the same amount every time (equipment not calibrated, zeroed, or worn; procedures incorrect or unreliable) • Sources usually identifiable; may be eliminated or reduced by changes to the experiment • Random error • Does not affect every measurement taken, or does not affect them in the same manner (e.g. reading of the apparatus) • The more trials done, the less of an effect a random error may have on results • May result from limits of accuracy of the apparatus, inconsistent recording, natural variations in samples
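A small simulation (with an assumed true value, systematic offset, and noise level, all invented for illustration) showing why more trials reduce the effect of random error but leave a systematic error untouched.

```python
import random

random.seed(1)
true_value = 20.0          # the quantity being measured (assumed)
systematic_offset = 0.5    # e.g. an uncalibrated instrument reads 0.5 too high

for n_trials in (3, 30, 300):
    readings = [true_value + systematic_offset + random.gauss(0, 0.4)
                for _ in range(n_trials)]
    mean = sum(readings) / n_trials
    print(f"{n_trials:4d} trials: mean = {mean:.2f}")

# Averaging more trials shrinks the scatter caused by random error,
# but the +0.5 systematic offset remains no matter how many trials are run.
```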

  36. Graphing Data

  37. GRAPHING • Title the Graph - a short but good descriptive title that clearly tells what the graph is about. • Identify the Variables • The independent variable goes on the X axis (horizontal), or TIME when the effect of the independent variable is measured over time (the variable vs. control, or different degrees of the variable, will be shown as different lines on the graph) • The dependent variable goes on the Y axis (vertical)

  38. Determine the Scale of the Graph - determine a scale (numerical value for each square) to best fit the range of each variable. Spread the graph to use MOST of the available space. • Number and Label Each Axis - tells what data each axis represents. Include units.

  39. Plot the Points • Draw the Graph - connect the dots with lines for continuous data. Show an approximate best-fit line/curve if appropriate (most graphs of experimental data are not drawn as "connect the dots") • Label Lines or Use a Legend - if the graph shows more than one line/set of data, label each line or make a legend/key. Use different marks/colors for different sets of data
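A minimal plotting sketch that follows these steps, using matplotlib and hypothetical plant-growth data; the values, labels, and title are assumptions for illustration.

```python
import matplotlib.pyplot as plt

# Hypothetical data: plant height over time for two treatments
days            = [0, 2, 4, 6, 8, 10]
height_light    = [1.0, 1.8, 2.9, 4.1, 5.0, 6.2]   # cm
height_no_light = [1.0, 1.3, 1.6, 1.8, 2.0, 2.1]   # cm

plt.plot(days, height_light, marker="o", label="Light (experimental)")
plt.plot(days, height_no_light, marker="s", label="No light (control)")

plt.title("Effect of Light on Plant Height Over 10 Days")  # descriptive title
plt.xlabel("Time (days)")          # independent variable / time on the X axis
plt.ylabel("Plant height (cm)")    # dependent variable on the Y axis
plt.legend()                       # key for the two data sets
plt.show()
```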

  40. Types of Graphs • Pie Charts - used to show the parts (percentages) of a whole • Line Graphs - used for continuous data, data that is changing. Used to track changes over time or to measure the effect of one thing on another • Bar Graph - used to compare something between groups. Can be used to show large changes over time. Use a legend to tell what each bar represents

  41. X-Y Plot (Scatterplot) - used to determine if there is a relationship between things. Used when data points are not related/do not show changes over time/effects

  42. Histogram Creating this kind of graph requires setting up bins—uniform range intervals that cover the entire range of the data. Then the number of measurements that fit in each bin (range of units) is counted and graphed on a frequency diagram, or histogram. If enough measurements are made, the data can show an approximate normal distribution, or bell-shaped distribution, on a histogram. These constitute parametric data. The normal distribution is very common in biology and is a basis for the predictive power of statistical analysis.
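A short sketch of building such a histogram from made-up leaf-length measurements, using 10 uniform bins with matplotlib (the sample and bin count are assumptions for illustration).

```python
import random
import matplotlib.pyplot as plt

random.seed(0)
# Hypothetical sample: 200 leaf lengths (mm) drawn from a normal distribution
leaf_lengths = [random.gauss(50, 5) for _ in range(200)]

# 10 uniform bins spanning the range of the data
plt.hist(leaf_lengths, bins=10, edgecolor="black")
plt.title("Frequency Distribution of Leaf Length")
plt.xlabel("Leaf length (mm)")
plt.ylabel("Frequency (count per bin)")
plt.show()
```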

  43. A normal distribution is a very important statistical data distribution pattern occurring in many natural phenomena, such as height, blood pressure, lengths of objects produced by machines, etc. Certain data, when graphed (data on the horizontal axis, amount of data on the vertical axis), creates a bell-shaped curve known as a normal curve, or normal distribution.

  44. Normal distributions are symmetrical with a single central peak at the mean (average) of the data. The shape of the curve is described as bell-shaped with the graph falling off evenly on either side of the mean. Fifty percent of the distribution lies to the left of the mean and fifty percent lies to the right of the mean.

  45. The standard deviation is a statistic that tells you how tightly all the various examples are clustered around the mean in a set of data. When the examples are tightly bunched together and the bell-shaped curve is steep, the standard deviation is small. When the examples are spread apart and the bell curve is relatively flat, that tells you that you have a relatively large standard deviation. • The standard deviation is a measure of how spread out numbers are, roughly the typical distance of the values from the mean
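A small worked sketch, with made-up numbers, showing how a sample standard deviation is computed: square each deviation from the mean, average the squares (dividing by n - 1 for a sample), then take the square root.

```python
data = [4, 8, 6, 5, 7]                      # hypothetical measurements

mean = sum(data) / len(data)                # mean = 6.0
squared_devs = [(x - mean) ** 2 for x in data]
variance = sum(squared_devs) / (len(data) - 1)   # sample variance
std_dev = variance ** 0.5

print(f"mean = {mean}, standard deviation = {std_dev:.2f}")   # s is about 1.58
```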

  46. One standard deviation away from the mean in either direction on the horizontal axis accounts for somewhere around 68 percent of the data. Two standard deviations away from the mean account for roughly 95 percent of the data. And three standard deviations account for about 99.7 percent of the data.
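A quick way to check these percentages is to integrate the normal curve; the sketch below uses Python's built-in statistics.NormalDist to compute the area within 1, 2, and 3 standard deviations of the mean.

```python
from statistics import NormalDist

nd = NormalDist(mu=0, sigma=1)   # standard normal; any mean/sd gives the same fractions

for k in (1, 2, 3):
    fraction = nd.cdf(k) - nd.cdf(-k)   # area within k standard deviations of the mean
    print(f"within ±{k} sd: {fraction:.1%}")
# Prints roughly 68.3%, 95.4%, 99.7%
```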
