ZDNET Senior Contributing Editor Eileen Yu writes about how Jiva mitigates the risks of inaccuracy and intellectual property infringement by training our AI-powered Crop Doctor feature on our own data.
Our Chief Technology Officer, Tejas Dinkar, explains how our field teams, together with our network of farmers and agronomy experts, collected and annotated hundreds of thousands of images from the ground to train the AI model to identify crop pests and diseases and recommend treatments.
Read the full article here.
By Eileen Yu, ZDNET Senior Contributing Editor
Organizations that want to harness generative artificial intelligence (AI) more effectively should use their own data to train AI systems, using foundation models as a starting point.
Doing so can provide more relevant context and allay concerns about the potential risks, such as inaccuracy and intellectual property infringements.
Accuracy, in particular, is a top priority for a company such as Jiva. The agritech vendor uses AI to power its mobile app, Crop Doctor, which identifies crop diseases via image processing and computer vision, and recommends treatments. It also taps AI to determine the creditworthiness of farmers who ask for cash advances prior to a harvest and repay the loans when their harvest pays out.
It uses various AI and machine-learning tools, including Pinecone, OpenAI, and scikit-learn, as well as Google's TensorFlow and Vertex AI. Jiva has operations in Singapore, Indonesia, and India.
It trains its AI models on thousands of annotated images for each disease, according to Jiva's CTO Tejas Dinkar. The agritech company has collected hundreds of thousands of images from the ground through its field teams and farmers who are part of Jiva's network and use its app AgriCentral, which is available in India.
Its field experts are involved in the initial collection and annotation of images, before these are passed on to agronomy experts who annotate them further. The images are then added to the training data for the model used to identify plant diseases.
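Jiva has not published its training pipeline, but the workflow described above maps onto a standard supervised image-classification setup. The following is a minimal, hypothetical sketch using TensorFlow, one of the libraries Jiva mentions, assuming the annotated images are organized into one folder per disease label; the directory names, backbone model, and hyperparameters are illustrative assumptions, not Jiva's actual configuration.

```python
# Hypothetical sketch: training a crop-disease classifier from annotated images.
# Assumes images are stored one directory per disease label, e.g.
#   data/train/leaf_blight/*.jpg, data/train/rust/*.jpg, ...
# Directory layout, backbone, and hyperparameters are illustrative only.
import tensorflow as tf

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=32)
num_classes = len(train_ds.class_names)

# Transfer learning from an ImageNet backbone; only the new head is trained here.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Transfer learning from a pretrained backbone is a common choice when, as described here, each class has thousands rather than millions of labeled examples.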
For new crops or crops that its team of experts are less familiar with, Jiva brings in other platforms, such as Plantix, which have extensive datasets to power image recognition and diagnosis information.
Where there is a lack of robust AI models, humans can step back in.
For rare or highly specific crop issues, Dinkar noted that Jiva's team of agronomy experts can work with local researchers and field teams to resolve them.
The company's credit assessment team also overlays data generated by the AI systems with other information, he said. For example, the team may make an on-site visit and realize a crop has only just become ready for harvest, which the AI-powered system may not have taken into consideration when it generated the credit assessment.
"The objective is not to remove humans entirely, but to move them to areas they can amplify and [apply] adaptive thoughts, which machines aren't yet up to," Dinkar said.
Asked about challenges Jiva encountered with its generative AI adoption, he pointed to the lack of a standard prompt methodology across different software versions and providers.
"True omni-lingualism" also is missing in LLMs, he said, while hallucination remains a key issue.
"Various large language models all have their own quirks [and] the same prompt techniques do not work across these," he explained. For instance, through refined prompt engineering, Jiva has been able to instruct its agronomy bot to clarify if it is unable to infer, from context, the crop that the farmer is referencing.
However, while this particular prompt performed well on GPT-3.5, it did not do as well on GPT-4, he said. It also does not work on other LLMs.
"The inability to reuse prompts across versions and platforms necessitates the creation of bespoke sets of prompt techniques for each one," Dinkar said. "As tooling improves and best practices emerge for prompting various large language models, we hope cross-platform prompts will become a reality."
Improvements are also needed in cross-language support, he said, pointing to the out-of-context responses its chatbot sometimes generates.