Precision Livestock Farming (PLF), also known as Precision Animal Management, is defined by Wathes as “the management of livestock production using the principles and technology of process engineering to monitor, model and manage animal production”6. PLF applies technological advances to the monitoring of, and data collection from, individual animals within large herds, with the goal of optimizing the welfare and contribution of each animal. While PLF for swine farming relies on new technology, it cannot be considered a new science: as early as 1988, DeShazer et al.7 reported over 90 applications of image analysis in pig production. The applications and availability of precision livestock farming tools have since increased greatly, making it a field that should catch the attention of veterinarians and stock people alike. This growth is not limited to the livestock sector; across a wide variety of fields, the rate of technological advancement over the last two decades leaves even the most committed enthusiast in the dust. When we consider Moore’s Law8 - the principle that the number of transistors on an integrated circuit chip doubles approximately every two years - it is no wonder staying up to date seems a Sisyphean task. In 1971, a microprocessor housed approximately 2,300 transistors, while at the time of writing a microprocessor comfortably fits 19.2 billion. A perhaps more relevant example, given the earlier discussion of machine learning, is the computational ability of supercomputers: currently, the most powerful supercomputers can complete roughly 93 quadrillion calculations per second8. With figures such as these in mind, it becomes easy to see how powerful machine learning is and how it could be a highly beneficial component of PLF.
While principally a review of the scientific literature on PLF from 2012 to the present, this series also draws on proprietary data, institutional input, market conditions, and scholarly ethical assessments. It is provided as information with an emphasis on food animal welfare, including (but not limited to) health, productivity, behavior, and physiological responses, as defined by the American Veterinary Medical Association (AVMA) Welfare Principles9. Mention of trade names, products, commercial practices, or organizations does not imply endorsement by the authors.
Analysis and decision making for agriculture - It is all in the algorithm.
An algorithm is a formula, or step-by-step set of operations, used to solve a specific problem or class of problems. A programming algorithm is a computer procedure that tells the computer precisely what steps to take to solve a problem, using inputs to determine outputs. Programmers provide the human initiation of the process by writing the algorithm that instructs the computer how to perform the specific operations necessary to solve a problem. Machine learning - of which deep learning is a prominent subset - is a family of computational methods that allows an algorithm to, in effect, program itself using large sets of examples that demonstrate the desired behavior. Because the computer “learns” from these example sets of existing data, a human is not constantly required to specify steps or rules for the computer to follow10. For example, algorithms are often used in research to determine gait kinematic patterns in conditions such as hip osteoarthritis11, Parkinson’s disease12, and multiple sclerosis13, and they show potential for future clinical use.
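To make the distinction concrete, a conventional hand-written algorithm can be sketched as an explicit, fixed sequence of steps. The function, herd data, and 7-day threshold below are hypothetical illustrations, not from any cited system:

```python
def flag_overdue_sows(days_since_weaning, threshold=7):
    """A hand-written algorithm: explicit steps, no learning.

    Input: a dict mapping sow ID -> days since weaning.
    Output: IDs of sows past the expected wean-to-estrus interval.
    """
    overdue = []
    for sow_id, days in days_since_weaning.items():  # step 1: examine each input
        if days > threshold:                         # step 2: apply the fixed rule
            overdue.append(sow_id)                   # step 3: record the output
    return overdue

print(flag_overdue_sows({"S101": 4, "S102": 9, "S103": 12}))  # ['S102', 'S103']
```

Every step and the threshold itself were specified by a person; a machine learning approach would instead infer such a rule from labeled examples.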
Machines Mimicking the Mind: Machine Learning
Data mining is the process by which useful information and trends are extracted from large databases and datasets, and swine veterinarians are accustomed to using it to glean information on topics such as sow performance and history. The use of data mining can be observed in “information-provided” database software systems (i.e. PigCHAMP, Swine Management Systems, Cloudfarms, PigKnows, MetaFarms, Farms.com) that are driven by the input of observed data (e.g. days to first estrus, number of piglets born alive). Machine learning, in contrast, learns by selecting from a pool of probability models those that best predict unobserved data. Beginning with group-level or individual-level observations, algorithms sift through variables searching for combinations that reliably predict outcomes. One of the greatest benefits of machine learning is its ability to use highly complex data, such as a large collection of predictors, to produce vastly richer estimates than would be possible through standard statistical models10. This capacity allows for the use of new kinds of data, whose sheer volume or complexity would previously have made analyzing them unimaginable.
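As a minimal sketch of this idea, the model below fits itself to labeled herd records rather than following a hand-coded rule. The records, labels, and the choice of logistic regression trained by gradient descent are all illustrative assumptions, not drawn from any of the systems named above:

```python
import math

# Hypothetical training records: (parity, lactation length in days),
# labeled 1 if the sow returned to estrus within 7 days of weaning, else 0.
records = [(1, 16), (2, 18), (3, 17), (4, 19), (2, 22), (3, 24), (4, 23), (5, 25)]
labels = [0, 0, 0, 0, 1, 1, 1, 1]

def predict(w, b, x):
    """Probability of a 'yes' outcome under a logistic model."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Gradient descent: instead of a person specifying the rule, the model
# adjusts its own parameters to fit the observed examples.
w, b, rate = [0.0, 0.0], 0.0, 0.01
for _ in range(20000):
    for x, target in zip(records, labels):
        error = predict(w, b, x) - target
        b -= rate * error
        w = [wi - rate * error * xi for wi, xi in zip(w, x)]

# The fitted model now scores unseen combinations of the two variables.
short_lactation = predict(w, b, (3, 16))
long_lactation = predict(w, b, (3, 24))
```

The same loop scales to many more predictors, which is where machine learning begins to outpace manual rule-writing.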
Artificial neural networks (ANNs) are systems that can form a component of machine learning. They are modeled on the design and function of the brain. In these systems, numeric input data enter the network and are passed along weighted connections (“synapses”) to neurons that perform specific calculations and output a result. ANNs can have many layers of neurons and synapses, allowing for more complex calculations and deeper learning. When presented with images and video, ANNs are particularly useful because they are capable of extracting many different data points simultaneously and recognizing patterns and trends within the image itself14.
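The layered structure described above can be sketched in a few lines. This is an illustrative forward pass only; the weights are arbitrary placeholders (in a real ANN they would be learned from data), and the network shape (2 inputs, 3 hidden neurons, 1 output) is chosen purely for the example:

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: a weighted sum over its incoming "synapses",
    followed by a squashing (logistic) activation."""
    z = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, layers):
    """Pass numeric inputs through successive layers of neurons."""
    activations = inputs
    for layer in layers:
        activations = [neuron(activations, w, b) for w, b in layer]
    return activations

# A tiny network: 2 inputs -> 3 hidden neurons -> 1 output neuron.
hidden = [([0.5, -0.2], 0.1), ([0.3, 0.8], -0.3), ([-0.6, 0.4], 0.0)]
output = [([1.0, -1.0, 0.5], 0.2)]
result = forward([0.7, 0.1], [hidden, output])  # a single value between 0 and 1
```

Stacking more layers in `layers` is all it takes to deepen the network, which is why image-processing ANNs can grow to dozens of layers.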
Machine learning particularly benefits from open-source development, a practice that allows programmers to collaborate to alter and improve algorithms. Open source functions much like a discussion among colleagues, on the premise that more heads are better than one when it comes to resolving an issue. It allows people to freely access local versions of algorithms online, edit them to complete new tasks, and grow the code beyond its original release. For example, to train the computer, a programmer labels examples of an item repeatedly until the computer can classify it on its own. This form of machine learning can be expanded from classifying stationary objects in an image to classifying a moving object by adding a tracking component that follows an item as the system continues to classify it. One open-source program, YOLO (You Only Look Once), is built on Darknet, an open-source neural network framework. Open-source development of Darknet has taken classification in motion from one image every 20 seconds to an image every 1/20 of a second and improved tracking time by 1000 percent15. For more information on object detection and tracking, visit https://pjreddie.com/darknet/yolo/.
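The train-until-correct idea described above - presenting labeled examples repeatedly until the machine classifies them on its own - can be sketched with a classic perceptron loop. The 2-D features (imagined here as the width and height, in arbitrary units, of an object detected in an image) and their human-assigned labels are hypothetical:

```python
# Hypothetical labeled presentations: ((width, height), label from a person).
examples = [((2.0, 1.0), 1), ((1.5, 0.5), 1), ((0.5, 2.0), -1), ((1.0, 3.0), -1)]

w, b = [0.0, 0.0], 0.0
for _ in range(100):                          # repeat the labeled presentations
    mistakes = 0
    for (x1, x2), label in examples:
        predicted = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
        if predicted != label:                # wrong answer: nudge the weights
            w[0] += label * x1
            w[1] += label * x2
            b += label
            mistakes += 1
    if mistakes == 0:                         # the machine now classifies on its own
        break
```

Detectors like YOLO apply the same principle at far greater scale, learning millions of weights from large labeled image sets rather than two weights from four points.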
With this basic introduction to algorithms, machine learning, and PLF, the focus can now turn to the different types of technology that make the essential collection of data possible. In recent years, stock people implementing PLF have typically utilized sensors initially developed for gaming systems such as the Microsoft XboxTM (notably the KinectTM depth sensor) and the Nintendo WiiTM. This off-label use of consumer technology in agriculture carries with it the benefits of widespread availability and consumer-driven lower costs, giving livestock farmers easy and inexpensive access to 3-dimensional sensors, cameras, and microphones. Through the following series of articles, we will look at technologies currently in use and some that show potential for implementation in PLF practices.
Source: msu.edu