FEB 23, 2021 - EASTERN SEARCH & SURVEY: USING AN AI MODEL TO HELP IDENTIFY NEW WRECKS OFF A HANG LIST


Postby EC NEWELL MAN » Tue Feb 23, 2021 1:13 pm

Over the decades of searching for wrecks, we never gave a thought to the possibility of AI modeling, but somehow Captain Ben at EASTERN SEARCH & SURVEY has just taken it to the next level. Please read on to see how technology can be used to pick out and identify, from hang logs, the best locations to hunt for a wreck..........

Image
Subset of raw database prior to filtering (~64k unique points). Not exactly obvious to know where to focus!
====================


Winter project update: practical application of artificial intelligence / data science to shipwreck hunting:


With our database of unidentified underwater objects swelling to over 100,000 points recently, it has become increasingly difficult to select and prioritize the most promising shipwreck candidates.

Modern machine learning ("ML" or "AI") techniques seemed appropriate for the task and were therefore deployed in a major overhaul of our search tools. Other researchers have used AI & computer vision to parse through bottom imagery (multibeam sonar data) to distinguish shipwrecks from seabed, but our challenge is different and mostly concerns areas for which sonar data does not yet exist.

To our knowledge, nobody had applied AI to "hang logs" (lists of obstructions compiled by commercial fishing vessels that drag their gear along the bottom) before, so we decided to experiment.

It all started in spring 2018 when Alex Barnard and I (Ben Roberts) decided to invest in a side scan sonar and start searching for shipwrecks. We had been diving locally (NY/NJ) since 2007 and became especially interested in the numerous "virgin" shipwrecks known to exist in local waters. The sonar greatly accelerated our exploratory efforts, not just by searching but also by imaging and vetting prospective sites prior to diving them (it's possible to scan many more sites in a day than the human body can dive).

Almost all the initial prospects turned out to be duds, so sometimes at the end of a long day we would scan a "known" wreck, mostly just so we'd have at least something to show for the day's efforts. Soon we realized that those images were helpful for divers and fishermen, and we’re now learning that records of those failed searches are also useful but in a different way.

Over the next three years we stepped up our search efforts by building a large database of prospects to explore, painstakingly assembled from dozens of sources: commercial fishermen, sport fishermen, historical records, books, and just about anything else we could get our hands on.

Two years ago we hit an inflection point as the database grew from thousands of points to many tens of thousands, and the challenge evolved from collecting points to finding new ways to process and filter them: separating the “wheat from the chaff".

We chose to build the database in Microsoft Excel given its familiarity, flexibility, ease of use and customization potential (especially via the integrated VBA programming language). This allowed us to create custom tools to perform various analyses, generate search reports and plans, and even integrate with our sonar software and the boat's electronics (chartplotter & autopilot).

We purchased Andren Software’s SeaMarks program for LORAN-GPS coordinate conversions, and carefully assembled “calibration points” to ensure maximum accuracy.

So far most of the time/effort has been spent on more mundane tasks like data cleaning & entry, given the handwritten nature of many of the source records (mostly incompatible with automated text recognition / OCR software).

One of the functions we built was a search algorithm designed to help prioritize wreck prospects / scanning targets. The initial version used site characteristics that we thought would work well based on past experience.

To oversimplify: it scanned through hang logs looking for sites that were named or had commentary (like "shipwreck" or "bad hang") or were near points from a different source, then gathered other nearby points and assigned the resulting "cluster" a probability score based on several factors such as the number and type of points, their density (spacing), presence of certain keywords, and others.
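To make that concrete, here is a rough sketch in R of what that kind of keyword-seeded cluster scoring looks like. This is an illustration only, not our actual VBA code; the column names, keywords, radius and weights are placeholders.

```r
# Rough illustration of keyword-seeded cluster scoring (placeholder column
# names, keywords, radius and weights; the real tool lives in Excel/VBA and
# combines many more factors).
score_clusters <- function(pts, radius_m = 300) {
  # pts: data frame with columns lat, lon, source, notes
  keywords <- c("wreck", "bad hang", "snag")
  is_seed  <- grepl(paste(keywords, collapse = "|"), tolower(pts$notes))

  # approximate local distances in meters (good enough at these scales)
  m_per_deg_lon <- 111320 * cos(mean(pts$lat) * pi / 180)
  x <- pts$lon * m_per_deg_lon
  y <- pts$lat * 111320

  do.call(rbind, lapply(which(is_seed), function(i) {
    d       <- sqrt((x - x[i])^2 + (y - y[i])^2)
    cluster <- which(d <= radius_m)
    data.frame(
      seed_row  = i,
      n_points  = length(cluster),
      n_sources = length(unique(pts$source[cluster])),
      mean_gap  = if (length(cluster) > 1) mean(d[cluster][d[cluster] > 0]) else NA,
      # toy weighting; the real score blends many factors
      score     = length(cluster) + 2 * length(unique(pts$source[cluster]))
    )
  }))
}
```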

This approach worked fairly well, but there is wide variation in how obstructions are logged by commercial fishermen - not to mention plenty of inaccuracies - so a more robust approach was needed as we integrated more and more points from disparate sources.

The first step in developing an improved algorithm was to analyze the sites we've already scanned and quantify patterns in their corresponding point clusters. So, we coded a tool that generates ~120 factors to define each site. The hardest part of this exercise was ensuring that the factors didn't reflect any changes/updates we made to our database after scanning the sites (the process was therefore kind of like unscrambling an egg).
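As a sketch of that guard against hindsight, the factor generator has to look at the database "as of" each scan date, something along these lines (the column names date_entered / date_scanned are assumptions for illustration):

```r
# Illustrative "as of" guard: compute a site's factors using only the points
# that were already in the database before that site was scanned.
factors_asof_scan <- function(points, site) {
  known <- subset(points, date_entered < site$date_scanned)
  data.frame(
    n_points  = nrow(known),
    n_sources = length(unique(known$source)),
    has_name  = any(nzchar(known$name))
    # ...the real tool computes roughly 120 such factors
  )
}
```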

The result was a list of about 700 sites, which we filtered down to ~400 by excluding certain ones less relevant to future offshore searches (such as inshore sites with widely-known/published locations).

The filtered site list was then fed into a machine learning algorithm using a free statistical software program called "R." Of the many ML 'algos' available, we chose "boosting," which basically creates many decision trees, learns through trial and error, and ultimately calculates results by taking a weighted average across all the trees’ individual results.
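For readers who want to experiment, here is a minimal sketch of fitting a boosted classification model in R with the gbm package. The package choice, file name, column names and tuning values are illustrative assumptions, not necessarily what we ran.

```r
# Minimal boosted-tree classifier in R using the gbm package.
# "site_factors.csv" and its columns are placeholders for illustration.
library(gbm)

sites <- read.csv("site_factors.csv")      # ~400 scanned sites, ~120 factors, is_wreck = 0/1

set.seed(42)
model <- gbm(is_wreck ~ ., data = sites,
             distribution = "bernoulli",   # binary classification (wreck vs. not)
             n.trees = 3000,               # grow more trees than needed, trim later
             interaction.depth = 3,
             shrinkage = 0.01,
             cv.folds = 5)                 # cross-validation, used for the error curve below
```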

To assess the new model’s accuracy in correctly classifying wreck sites, we used a common technique known as "cross-validation": you "train" the model using a subset of your data (typically 50%-90%), then "test" its accuracy on the remainder.

The basic idea is that the process of fitting a model to training data will result in a low (and relatively meaningless) error rate by definition, so a better way to predict real-world performance is to test it on new data that it hasn't seen before, like a teacher testing their student’s ability to apply learned knowledge to an unfamiliar problem.

In our case, this “test error” rate was found to be minimal (optimized) using a model containing about 1,000 decision trees, which produced a classification accuracy rate of over 80% (see third image attached).
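In gbm that optimization is essentially a one-liner, and a held-out accuracy check is not much more. Again a sketch; the holdout data frame is assumed here.

```r
# Tree count that minimizes the cross-validated error; also plots the error curve.
best_trees <- gbm.perf(model, method = "cv")

# Accuracy on a held-out set of sites the model never saw during training.
probs <- predict(model, newdata = holdout, n.trees = best_trees, type = "response")
mean((probs > 0.5) == holdout$is_wreck)     # fraction classified correctly
```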

Boosted models themselves are hard to interpret (what is a weighted average of 1,000 decision trees?) but they can illuminate the most important factors of the underlying problem, which in this case include cluster point count and density (see second image attached). Surprisingly, most names & commentary were relatively unimportant, perhaps because they're used in such different ways by various sources.
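That "relative influence" ranking is something gbm reports directly:

```r
# Relative influence of each factor, averaged across all trees in the model.
influence <- summary(model, n.trees = best_trees, plotit = TRUE)
head(influence, 10)   # top factors (cluster point count, density, etc.)
```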

So, how to practically apply this tool to future searches? One idea was to re-craft our Excel search algorithm into a single decision tree based on the most important factors determined by boosting, but that would not have been straightforward to build and its accuracy would likely have been sub-optimal.

As it turns out, with a little creativity, it’s not difficult to get Excel to communicate with other programs, so we integrated “R” into the search tool. This approach preserves the best of both worlds: the conveniences/flexibility of Excel combined with the power of modern machine learning.
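One simple way to wire that up (a sketch only, with made-up file names) is for the spreadsheet to export candidate clusters to a CSV, shell out to Rscript, and read the scores back in:

```r
# score_clusters.R - hypothetical script launched from Excel/VBA, e.g.
#   Shell "Rscript score_clusters.R clusters_to_score.csv scored_clusters.csv"
library(gbm)

args     <- commandArgs(trailingOnly = TRUE)
clusters <- read.csv(args[1])               # cluster factors exported by the spreadsheet
model    <- readRDS("boosted_model.rds")    # boosted model saved earlier with saveRDS()

clusters$wreck_prob <- predict(model, newdata = clusters,
                               n.trees = 1000, type = "response")

write.csv(clusters, args[2], row.names = FALSE)   # Excel imports this back in
```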

The most challenging part was creating a new clustering algorithm (borrowing from computer vision concepts) in Excel/VBA to gather & filter the data that gets fed into the ML algorithm for probability assessment / classification.
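The flavor of it, sketched in R rather than VBA, is essentially connected-component labeling over a neighbor graph: points closer together than some threshold get merged into the same cluster (threshold and column names are placeholders):

```r
# Distance-based clustering, similar in spirit to connected-component labeling
# in computer vision: points within eps_m meters of each other share a cluster.
cluster_points <- function(pts, eps_m = 200) {
  n  <- nrow(pts)
  kx <- 111320 * cos(mean(pts$lat) * pi / 180)          # meters per degree of longitude
  d  <- as.matrix(dist(cbind(pts$lat * 111320, pts$lon * kx)))

  labels  <- rep(0L, n)
  current <- 0L
  for (i in seq_len(n)) {
    if (labels[i] == 0L) {
      current <- current + 1L
      stack <- i
      while (length(stack) > 0) {                       # flood-fill the neighbor graph
        p     <- stack[length(stack)]
        stack <- stack[-length(stack)]
        if (labels[p] == 0L) {
          labels[p] <- current
          stack <- c(stack, which(d[p, ] <= eps_m & labels == 0L))
        }
      }
    }
  }
  labels                                                # cluster id for each point
}
```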

Initial results from the new search tool appear promising, with new prospects revealed. Accuracy is likely to improve further over time as more sites are scanned and more data is collected.

We live in a golden age for offshore shipwreck exploration: we're close enough to the past that many historical wrecks still survive the destructive effects of storms, draggers and the hostile salt water environment, yet we’re far enough into the future that technologies such as side scan sonar, rebreathers and artificial intelligence are generally affordable and widely accessible. It is truly a unique confluence in history.


Image
"Relative influence" plot of the most important factors used by the machine learning algorithm.

Image
"Test error" plot showing cross-validated error rates vs # of decision trees used in boosted model. In this case the minimum error rate occurs at about 1,000 trees.

Image
Output: shipwreck prospects, scored with probability estimates. This plot excludes wrecks we have confirmed ourselves or know to have been confirmed by others.

Ben.....

I have never even considered such a process on this scale, perhaps in part because of the countless hours spent in the proverbial "connecting of the hang # dots" from all the various sources we have on hand. The problem, as we quickly learned, is that hang numbers provided over the decades by different captains and different mobile-gear sources carry an inherently high level of inaccuracy, both in where the gear actually hung and in how the captain 'chicken scratched' down the Loran A and later Loran C locations we now have to work with.

The other problem is of course the nature of the environment, the ocean itself and its impact on wrecks, especially old wooden wrecks which figuratively "melt away" over time. One advantage we do enjoy in going back through old hang logs, trying to locate anything we may have glossed over or anything new, is that various vessels have traveled throughout our region many times, scanning these areas with standard bottom machines, and whenever an obstruction is seen the location gets documented.

Do we miss some shipwrecks in between the points of Montauk and Cape May? Maybe so; we certainly continue to wonder where the F/V GREEN EYES went down off Fire Island. Commendable efforts here Ben in taking this to the next level.....


Eastern Search & Survey...

Ec Newell Man, all great points, thanks Steve! You're right, there are so many sources of potential error - and wrecks that used to be prominent obstructions but have been reduced or buried over time - that it's a major challenge to sift through it all, and probably impossible without a lot of data and computing power.

To that end much of this is owed to you (thanks again!!). Hopefully the ~82% accuracy (~18% error rate) from the tests is indicative of how it will perform on the next batch of real-world scanning targets. And hopefully that will result in some interesting new finds this season - we'll know soon either way!

