Kevin R. Cashen
Super Moderator
Let me continue with my line of thought concerning testing and results. I actually like these results because they make us stop and go “what the @#!$?” That is a good test! The best tests always produce ten times more questions than they do answers. When I do a test, I am always disappointed by predictable results. Testing is for finding the unexpected, or shortcomings that can be improved upon; tests that only confirm our claims and beliefs are called marketing, and the majority of the “tests” you read about in magazines are exactly that. So this test is good: it surprises folks and makes us reconsider our conclusions.
Now the next step, which can be even harder than the initial work, is carefully and properly interpreting the results. Most of this is troubleshooting the data, which is easier with as few variables as possible; the bad news is that this exercise involves too many to count. It is in the interpretation of results that most misinformation in the bladesmithing field has been born. That is why it can be so touchy correcting that information without people feeling that you are questioning their honesty. Heck, they saw the results with their own two eyes; are you calling them a liar? Unfortunately, coming to a reasonable conclusion based upon the observed evidence can be completely sincere and honest, and still be totally wrong. This is why tests that shake up your preconceived notions are the best: they keep us looking at all possibilities without becoming complacent or relying on assumptions.
In interpreting the results, assumption is the most dangerous pitfall, and that is why keeping our minds clear of predictions of the outcome before interpreting the data can be very important. Does this sound patently obvious to folks? Well, consider this: how many reading these results had second thoughts about the effectiveness of the McMaster-Carr oil, but at the same time automatically assumed there had to be an error in the readings from water? After all, we all know that water is the fastest of those quenchants (I can assure you that P #50 is fast, but it does not beat water), yet none of these results has to be incorrect any more than it has to be correct. The proper way to approach the interpretation would be as if we had never heard of, or knew anything about, any of the liquids used, including water, and then to start deconstructing things by considering the properties involved.
If you think this is tough, consider that I actually do heat-treating consulting for some production companies; just try to work out over the telephone what could be going on when steel decides to defy the very laws of physics in a heat-treating quirk. The only way out of that morass is to examine and record every approach, change just one variable at a time, re-examine the results for any effects, and then move on to the next variable. Often the answer will reside in a factor so far removed from the area you are focusing on that you realize everything within the sphere of existence of that steel must be considered, since time didn’t start and stop just during the operation in question but continued to affect the outcome on every subsequent test.
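To put that one-variable-at-a-time bookkeeping in concrete terms, here is a minimal sketch in Python. It is only an illustration of the record-and-compare discipline, not anything from the test in question: the factor names, run fields, and hardness numbers are all hypothetical.

```python
# One-variable-at-a-time bookkeeping: record every run, change a single
# factor between runs, and only compare a run against a baseline when
# exactly one factor differs. All factor names and values are hypothetical.

from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class QuenchRun:
    """One recorded test: the factors used and the observed result."""
    factors: Dict[str, str]   # e.g. {"quenchant": "water", "agitation": "none"}
    hardness_hrc: float       # observed hardness for this run


def single_variable_diff(a: QuenchRun, b: QuenchRun) -> Optional[str]:
    """Return the one factor that differs between two runs, or None if the
    runs differ in zero factors or in more than one (not comparable under
    a one-variable-at-a-time protocol)."""
    differing = [k for k in a.factors if a.factors.get(k) != b.factors.get(k)]
    return differing[0] if len(differing) == 1 else None


def compare_to_baseline(baseline: QuenchRun, runs: List[QuenchRun]) -> None:
    """Report the effect of each run that changes exactly one factor
    relative to the baseline; flag runs that changed several at once."""
    for run in runs:
        changed = single_variable_diff(baseline, run)
        if changed is None:
            print(f"skip {run.factors}: more than one factor changed")
            continue
        delta = run.hardness_hrc - baseline.hardness_hrc
        print(f"{changed}: {baseline.factors[changed]} -> {run.factors[changed]}, "
              f"hardness change {delta:+.1f} HRC")


if __name__ == "__main__":
    # Hypothetical numbers, only to show the bookkeeping.
    base = QuenchRun({"quenchant": "oil", "agitation": "none", "temp_f": "1475"}, 62.0)
    trials = [
        QuenchRun({"quenchant": "water", "agitation": "none", "temp_f": "1475"}, 64.5),
        QuenchRun({"quenchant": "oil", "agitation": "heavy", "temp_f": "1475"}, 63.0),
        QuenchRun({"quenchant": "water", "agitation": "heavy", "temp_f": "1500"}, 65.0),
    ]
    compare_to_baseline(base, trials)
```

The point of the sketch is simply that the third trial gets flagged rather than interpreted: when two or more things changed at once, you cannot honestly attribute the result to either one.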
Variables, variables, variables… so long as there are variables, easy answers are a pipe dream.