By Mike Rankin
Getting old has its disadvantages. With each passing day there are aches and pains in new places, resulting in more visits to the doctor than ever seemed possible or desired. Change becomes increasingly difficult but necessary to keep up with the times. Everything seems too expensive because it’s measured against the prices of days gone by. People you remember as babies are having babies, and some of those you remember as young adults are no longer with us.
Of course I’ve also found that growing old has its advantages. Perhaps at the top of the list is the ability to compare and analyze based on extensive personal experience. This of course sometimes leads to painstaking historical analysis of the topic at hand… appreciated by some, not so much by others. I cannot lend much historical perspective to topics centered on politics, the arts, or deer hunting, but I can speak extensively about alfalfa scissors-cutting. I know, it’s a meager lot in life, but somebody has to do it. For those with an interest, what follows are some of my observations and educated opinions about this rite of spring that ranks up there with “pitchers and catchers report.”
In the beginning…
When I entered the extension scene in the late 1980’s, forage quality testing had reached the point of being a known commodity. Near Infrared Reflectance Spectroscopy (NIRS) offered a quick, economical, and reasonably accurate assessment of forage quality. However, the idea of testing forage before, rather than after, harvest was a new concept made possible by NIRS. Further, corn silage had not yet reached the prominence in Wisconsin that it has today, so milk production was largely dictated by alfalfa forage quality. The single biggest problem of the day was simple… forage quality on many farms was not adequate to support the high milk production made possible by advancements in animal breeding. Two- and three-cut alfalfa harvest systems were being replaced by four-cut systems. Along with this came the realization that spring alfalfa growth could be the best forage of the year if cut early and the worst of the year if cut late.
Educators, ag professionals, and producers jumped all over the information reported by local alfalfa scissors-cutting programs. At the time, there was no internet, so results were mailed, reported on an “Alfalfa Hotline” phone system, or sent out through the media. These efforts gained steam in the 1990’s and results were used by literally thousands of producers each year. The same is true today, but the internet has made it possible to both receive and disseminate the data in a more timely fashion. More importantly, scissors-cut programs got producers thinking about alfalfa in mid-May, usually before corn planting was done. Further, they have helped with the early identification of those years where alfalfa forage quality is unusually high or low because of abnormal temperature/moisture interactions.
The process…
Through the years I’ve taken scissors-cut samples on many beautiful spring mornings and on many not-so-beautiful ones: driving rainstorms, cold winds, and once an unexpected May snow I had to dig my samples out of. I’m not sure who coined the “scissors-cutting” term, but it certainly has withstood the test of time. Likely, not many project coordinators even use scissors to sample alfalfa. It doesn’t matter. In recent years I’ve found a pocket knife works about as well as any high-priced cutting device. It would seem that the sampling and analysis of standing forage would be fairly straightforward; however, it became apparent early on that there was plenty of opportunity for error, as erratic results from one sampling date to the next occurred more often than anyone would have liked. Unexplainable results (e.g., forage quality getting better from one date to the next or dropping by inexplicable amounts) were often traced to factors such as:
1) sampling in different field areas,
2) sampling at different times of the day,
3) sampling at different plant heights,
4) lab workers splitting the whole plant sample resulting in a subsample that was not representative,
5) early NIRS equations that did not adequately measure components of fresh forage,
6) lab worker error, and
7) inherent error in the entire process of a small sample size representing an entire field.

Although errors still can and do exist from time to time, I’ve learned they can be minimized or overcome with some procedural adjustments (a fresh-forage NIRS equation has also been developed). First, I stake out a small (3′ x 3′) representative area that the cooperating farmer leaves uncut. All samples are taken from this area early in the morning. Further, I always cut and submit two samples from the area on each sampling date. This allows me to average the results and provides a backup if for some reason one of the results is erroneous. If both samples are telling me the same thing, I have greater confidence in the results. I try to keep the samples relatively small to help ensure the lab workers will cut and dry the entire sample. Finally, unless weather conditions are extremely good for alfalfa growth (unusually warm), I’ve found that sampling more than once per week doesn’t do much good because the unavoidable error is often greater than the forage quality change occurring over just two or three days.
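The duplicate-sample routine above amounts to a simple quality check: average the two lab results, and fall back on a second look if they disagree badly. A minimal sketch follows; the function name and the 15-point RFV agreement tolerance are my own illustrative assumptions, not part of any lab protocol.

```python
# Combine duplicate scissors-cut samples from one sampling date.
# The 15-point RFV tolerance below is an illustrative assumption,
# not a published threshold.

def combine_duplicates(rfv_a: float, rfv_b: float, tolerance: float = 15.0):
    """Return (mean RFV, agreement flag) for two duplicate samples."""
    agree = abs(rfv_a - rfv_b) <= tolerance
    return (rfv_a + rfv_b) / 2.0, agree

mean_rfv, agree = combine_duplicates(182.0, 176.0)
print(mean_rfv, agree)  # 179.0 True — the duplicates corroborate each other
```

When the flag comes back false, the sensible move is the one described above: treat one result as suspect and lean on the backup sample rather than reporting the average.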
Enter Albrecht et al…
Perhaps the biggest change to the whole scissors-cutting effort came in the early to mid-1990’s with the introduction of a new system for estimating alfalfa forage quality. Dr. Ken Albrecht, University of Wisconsin forage researcher, and his students evaluated a number of plant-based criteria to determine which ones best predicted forage quality. Not surprisingly, plant height showed a strong relationship, and coupling it with plant maturity made it even a bit better. Albrecht released his equations describing the relationship, and I quickly transferred them into a spreadsheet program to develop a usable table format that estimated relative feed value (RFV). The system, known as Predictive Equations for Alfalfa Quality (PEAQ), was widely accepted by ag professionals and received a tremendous amount of media attention. The user needed to randomly select several small areas within an alfalfa field, measure the tallest stem, and note the most advanced stem in terms of maturity. Armed with this information, the user simply looked at the table to find the corresponding neutral detergent fiber (NDF) or RFV.
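For readers who want the arithmetic behind the RFV numbers mentioned here, RFV is not measured directly; it is computed from the acid detergent fiber (ADF) and NDF results using the conventional formula, under which full-bloom alfalfa at roughly 41% ADF and 53% NDF scores about 100 by construction. A minimal sketch (the function name is my own):

```python
# Relative feed value (RFV) from a forage fiber analysis, using the
# conventional coefficients: digestible dry matter from ADF, intake
# potential from NDF, scaled so 41% ADF / 53% NDF alfalfa scores ~100.

def relative_feed_value(adf_pct: float, ndf_pct: float) -> float:
    ddm = 88.9 - 0.779 * adf_pct   # digestible dry matter, % of DM
    dmi = 120.0 / ndf_pct          # dry-matter intake, % of body weight
    return ddm * dmi / 1.29

print(round(relative_feed_value(41.0, 53.0)))  # 100
```

Lower fiber values push RFV up, which is why early-cut spring alfalfa can grade so much higher than a late first cutting.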

The PEAQ system was quick, easy, and there was no lab cost. For several years many extension agents reported both scissors-cut (fresh forage) analysis and PEAQ results. I recall getting numerous phone calls and having teleconference discussions centered on either which system was best or why the two approaches didn’t always give the same results. As for the latter, all of the potential errors for scissors-cutting listed earlier apply here, along with an additional list for estimating forage quality using PEAQ. Many extension agents adopted PEAQ as the only method used for reporting forage quality and still do to this day.
Through the years, I’ve gained some working knowledge using PEAQ as a method to estimate forage quality. First, it’s important to realize the values are based on a one-inch cutting height. Hence, cutting at a higher stubble height (as most producers do) results in a higher value than what is estimated by PEAQ. For this reason I don’t believe the PEAQ estimate for standing forage is going to run 20 to 30 RFV points higher than what is actually harvested, as we initially predicted. My experience is that the two values will be pretty close unless harvest losses are considerably high.

Next, I’ve learned that in every alfalfa field there are genetic mutations resulting in the occasional extremely tall stem (5 to 6 inches taller than every other plant). Stay away from these when making estimates or forage quality will be grossly underestimated. Further, once alfalfa begins to lodge, the precision of PEAQ is soon lost. In some years this might happen relatively early in the growth process; in other years it might not happen at all. When looking for the stem with the most advanced stage, the protocol is clear that the bud or flower must be visible. Feeling a bud does not count as “bud stage.” In fact, I don’t even count “barely visible.” Finally, if you’re one of those, like me, who continues to sample both fresh forage and use PEAQ, you’ll likely find the two results are farthest apart (most often with PEAQ being the lower value) with early vegetative samples, and that the two methods begin to converge as plants reach bud stage. Because PEAQ is based on a one-inch height, I also like to take my fresh plant samples at the same height.
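The measure-then-look-up workflow can be sketched as a small table lookup: the tallest stem’s height and the most advanced maturity stage in a sampled area index into a table of RFV estimates. To be clear, the height buckets and RFV numbers below are illustrative placeholders of my own, not values from the published PEAQ table; only the structure of the lookup follows the protocol.

```python
# PEAQ-style lookup sketch. Taller and more mature stands map to lower
# RFV. All numbers are illustrative placeholders, NOT the published table.

# table[stage][height bucket, inches] -> estimated RFV (placeholders)
PEAQ_TABLE = {
    "late vegetative": {16: 237, 20: 217, 24: 200, 28: 185},
    "bud":             {16: 225, 20: 206, 24: 190, 28: 176},
    "flower":          {16: 211, 20: 194, 24: 179, 28: 166},
}

def peaq_rfv(tallest_stem_in: int, most_advanced_stage: str) -> int:
    """Round height down to the nearest bucket and look up estimated RFV."""
    buckets = sorted(PEAQ_TABLE[most_advanced_stage])
    bucket = max(b for b in buckets if b <= tallest_stem_in)
    return PEAQ_TABLE[most_advanced_stage][bucket]

print(peaq_rfv(22, "bud"))  # 206 — 22-inch stand, most advanced stem at bud
```

Note how the structure itself enforces the field rules above: measure the tallest normal stem (skip the mutant giants), and only count a stage whose bud or flower is actually visible.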
It wasn’t too long after PEAQ’s entrance into the forage prediction game that the yardstick and spreadsheet table were replaced by the PEAQ stick. These were first made available by the former Wisconsin Forage Council and have since been manufactured by several private sector companies in addition to the Midwest Forage Association.