sauropods.win is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Mastodon instance for sauropod appreciators everywhere.

What group is more diverse?

Plankton or Bugs?

Are plankton just... sea bugs?

@futurebird
Some plankton are tiny arthropods. But there are many other kinds of plankton not closely related to arthropods (or to each other): diatoms, radiolarians, algae, cyanobacteria, and so many more I don't know anything about.

I don't know if anyone has done any equivalent to the "deathfog a dozen different species of trees, see how many new insects fall out, use that to estimate total insect diversity" experiments for plankton.

@llewelly @futurebird Why do we still have to kill creatures just to know that they exist?

myrmepropagandist

@kechpaja @llewelly

I was saddened by the insect deaths that are used to catalog species. However, if you are proposing trying to just photograph them all ... well. We'd probably only know about half of the insects we do now.

I assume most people who study insects like them and wouldn't kill a few hundred without it making good progress. And these death traps are nothing in comparison to the habitat and environmental destruction that kills in the tens of millions.

@kechpaja @llewelly

I've mostly made my peace with the "destructive" ways of those who study arthropods.

Though I do feel sad still whenever I see dead bugs. It's the way they move, the things they do that make them so special.

I do wish that "descriptions" in collections included videos in addition to samples and written descriptions. There should be videos of every creature doing all of their basic tasks. We should collect that too!

@futurebird @kechpaja

I think the difficulty of doing such a survey without killing the insects has changed enormously since the 1980s (when Terry Erwin did the earliest studies). Live insects are orders of magnitude easier to photograph precisely than they were back then. The focus stacking that makes so much detail visible in modern insect photos was a hugely time-consuming, chemical-intensive process in the days of film photography.

@futurebird @kechpaja @llewelly This was true in the past but seems silly today. Capture, high res photogrammetry, release should be trivial.

@dalias @kechpaja @llewelly

There are bugs you'll never find if you just keep exposing different zones of the leaf litter and trees to light.

These arthropods are *only* found when you do a destructive survey. IDK how to get around that.

That said, I do not have the expertise to say if destructive surveys are done too often, but I do think doing them on occasion can't be avoided.

@futurebird @kechpaja @llewelly Seems like you could do a sort of destructive capture short of blasting the zone with pesticides or whatever they do.

@futurebird @dalias @kechpaja @llewelly
It's an interesting example. The first thing that comes to my mind as a Buddhist (I have taken a vow not to harm sentient beings) is whether it is necessary. I will kill a tick that has already burrowed into my skin. But I don't kill a mosquito that wants to bite me. Because it's not necessary. Is it necessary to document all these insects as quickly as possible, even at the cost of killing them?

@dalias @kechpaja @llewelly

Also, and I didn't believe this until I got serious about learning to identify ants: even the very best photographs do not have the detail of a pinned specimen. This is because we are talking about a very complex three-dimensional form. You may have a few excellent macro photos, but not one of them lets you see the mandibles from behind, or a leg at the required angle.

I now "get" why a holotype has to be a dead bug. Because you often need to go back and look again.

@futurebird @kechpaja @llewelly That's why I said photogrammetry. It's taking thousands of photographs from all angles and deriving a 3D model.
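
The geometric core of the photogrammetry described here, recovering a 3D point from its 2D projections in two views, can be sketched in a few lines of numpy. The camera matrices and coordinates below are toy values for illustration, not a real rig.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its projections in two calibrated views
    (the basic geometric step behind photogrammetry / structure from motion).
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D image points."""
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras looking at a point at (1, 2, 10)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])             # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1], [0], [0]])])   # shifted 1 unit along x
X_true = np.array([1.0, 2.0, 10.0])

X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est)  # recovers approximately [1, 2, 10]
```

A full pipeline also has to find matching points across images and estimate the camera poses themselves; this only shows why many overlapping views pin down 3D shape.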

@dalias @kechpaja @llewelly

*grumbling*

we will see I suppose

But don't you need the bug to be dead to make that kind of model in the first place?

Trying to get insects to hold still so they can be documented goes way back.... including this incident in 1665 where Hooke got an ant drunk.

sauropods.win/@futurebird/1113

I hope no one ever dips me in brandy so they can draw me under over-bright harsh lights...

@futurebird @kechpaja @llewelly I'd do it with an array of high speed camera sensors (cheap nowadays) on a rapidly spinning gimbal setup where the whole capture would be done in a few microseconds.

@dalias @futurebird @kechpaja
I'd like to know how practical it would be to do thousand-angle photogrammetry with *every* insect in a huge tree that has tens of thousands of insects in it, and then go on to do it for dozens of trees, on the budget of a taxonomist rather than a techno-fantasy budget. (I've read microscopic photogrammetry is now being used a lot in mite taxonomy, but they seem not to have the kinds of rigs that can do it fast enough to avoid having to immobilize the specimens.)

@dalias @llewelly @kechpaja

Could one design a box that could be clamped over a small creature and it would do the lighting and photos from all angles all in one go?

Or a box one could drop an insect into that would do such a multi-angle, perfectly lighted scan?

When I'm doing wild ant photography I tend to set up some bait and light it well, then hope the ants move into focus.

@futurebird @llewelly @kechpaja I think you'd want a box with clear floor halfway down to get the views from below.

@futurebird @dalias @llewelly @kechpaja Yes, this is the kind of rig that was used to make a 3D model of Obama in a single shot. I'm sure it's possible to make a simple small version for insects using cheap cameras.

You'll miss some parts, but at least you don't have to kill the specimen.

@mzedp @futurebird @dalias @llewelly @kechpaja

I think that photographing objects smaller than humans will require more expensive cameras, not cheaper ones, because in macrophotography a large depth of field is expensive, and a small depth of field can leave you with thousands of blurry photos from every possible angle, which would not be helpful.
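
The depth-of-field problem raised here can be made concrete with the standard thin-lens close-up approximation DOF ≈ 2·N·c·(m+1)/m², where N is the f-number, c the acceptable circle of confusion, and m the magnification. The numbers below are illustrative, not from the thread.

```python
def macro_dof_mm(f_number, coc_mm, magnification):
    """Approximate depth of field (mm) for close-up photography:
    DOF ~ 2*N*c*(m+1)/m^2 (standard thin-lens approximation)."""
    return 2 * f_number * coc_mm * (magnification + 1) / magnification**2

# Illustrative values: f/8, 0.03 mm circle of confusion, 2:1 magnification
print(macro_dof_mm(8, 0.03, 2))  # ~0.36 mm of the subject is in focus
```

At insect scales only a fraction of a millimetre is sharp in any one frame, which is why macro work leans on focus stacking or very small apertures.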

@Leszek_Karlik @mzedp @futurebird @llewelly @kechpaja Large depth of field just needs a pinhole camera, which in turn needs high-sensitivity sensors, but those are not terribly expensive and it's an utter mystery to me why they're not more common.

@dalias @Leszek_Karlik @futurebird @llewelly @kechpaja Lens setup is more critical than the camera, IMO. The OpenFlexure microscope uses an 8 MP Raspberry Pi camera sensor and a custom-made lens setup. There are a lot of cheap 12 MP "action cams" floating around that should be easy to hack; I don't see why it would be impossible to set up a proper lens arrangement using those, even if it takes some extra optical hardware. build.openflexure.org/openflex

@dalias @Leszek_Karlik @mzedp @futurebird @llewelly @kechpaja There's a lot of <$20 stamp-sized Raspberry Pi-compatible cameras that can be run in arrays, and I wonder if any of the cheap ones are also capable of macro.

@mzedp @futurebird @dalias @llewelly @kechpaja Could this technology be set up inside a kill jar? It wouldn't work for pest sampling, where a pit trap is set up to catch any insects (and the odd rodent or lizard) that pass through, but if it could be set up in a mason jar I could see some application.

@futurebird @dalias @llewelly @kechpaja what kind of resolution do you need? what is the smallest feature you need to be able to distinguish?

@dalias @llewelly @futurebird @kechpaja if you do it that way, but you could also do structure from motion

@dalias @llewelly @futurebird @kechpaja you could make a thing that ran on a phone, you just might have to spend a lot of time waving your phone around (assuming the thing you're trying to image doesn't move)

@dalias @futurebird @kechpaja @llewelly wouldn't we have to immobilize the animal though?

photogrammetry at a scale where you get source pictures comparable to microscopy needs focus stacking - meaning you have to mechanically move your optical setup or parts of it to get a single picture with everything in focus
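
The focus-stacking merge mentioned here can be sketched in a few lines, assuming the frames are already aligned: score each pixel's sharpness with a discrete Laplacian and keep it from the frame where it scores highest. This is a minimal illustration, not production stacking software.

```python
import numpy as np

def focus_stack(images):
    """Merge a stack of aligned grayscale frames focused at different depths
    into one image, taking each pixel from the frame where it is sharpest.
    images: list of 2D float arrays of identical shape."""
    stack = np.stack(images)  # shape (n, H, W)
    # Sharpness proxy: magnitude of a discrete Laplacian at each pixel.
    lap = np.abs(
        -4 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    best = lap.argmax(axis=0)  # index of the sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Real stacks also need per-frame alignment and smoothing of the selection map, and that alignment step is exactly what a moving insect breaks.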

@mmby @futurebird @kechpaja @llewelly There are cheap camera sensors on ali that make a room that's dark to the eye look like broad daylight with < 1/1000 s exposure. Set them up with pinhole lenses & focus is irrelevant.

@mmby @futurebird @kechpaja @llewelly Related: if I ever put together a DIY phone, this is what I'm using for the camera.

@dalias @futurebird @kechpaja @llewelly

my guess is that with pinholes you just get what you get: imaging resolution coupled to pinhole size (and to screen distance, if one wants optimal resolution), screen distance also being coupled to magnification, plus built-in spherical distortion

lenses and mirrors also have that, but each successive element can correct a bit

with lenses or mirrors you can trade properties against each other in the design; with pinholes you just have to take what you get
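
That coupling can be quantified: geometric blur grows with the pinhole diameter d, while diffraction blur grows as d shrinks, so for a given pinhole-to-sensor distance there is one best diameter and no other knob to turn. The sketch below uses the common Rayleigh-style compromise d ≈ √(2.44·λ·f); the numbers are illustrative.

```python
import math

def optimal_pinhole_diameter(focal_length_m, wavelength_m=550e-9):
    """Pinhole diameter where geometric blur (~d) roughly equals
    diffraction blur (~2.44*lambda*f/d), a classic near-optimal choice."""
    return math.sqrt(2.44 * wavelength_m * focal_length_m)

f = 0.010  # 10 mm pinhole-to-sensor distance
d = optimal_pinhole_diameter(f)
print(f"pinhole diameter ~ {d * 1e3:.3f} mm")  # ~0.116 mm
# The smallest resolvable detail on the sensor is on the order of d itself,
# and the working f-number is f/d ~ 86: hence the need for very sensitive sensors.
```

So the resolution floor and the light-starved f-number both fall straight out of d, which is the "take what you get" property described above.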

@dalias @futurebird @kechpaja @llewelly still basically impossible with moving objects with current technology. and when you compare the energy and material resources it takes to compute the 3d model from all those thousands of images against the one dead bug, that resource cost probably kills hundreds.

(not to say that maybe one day it could become the less destructive path)

@dalias @futurebird @kechpaja @llewelly Conceptually, this is an awesome idea and I can see it clearly in my mind, having spent a while investigating (and failing at) photogrammetry for non-moving objects.

Nobody wants to hear that AI can do things, but AI algorithms might be exactly what's needed to stitch/extrapolate 3D models from thousands of photos of a moving bug. I would be really interested to see the results, in any case. I think the "moving" part creates some big challenges, even with a bunch of high-speed photos being taken (e.g., with high-speed video equipment).

@guyjantic @futurebird @kechpaja @llewelly Photogrammetry is already related to what the scammy buzzword crowd calls "AI". It's based on statistical models for extrapolating 3D models from 2D images. Real point-cloud-sampling 3D scanning is a better process that's been largely abandoned for photogrammetry, but for the application here where you can't keep the subject still for real 3D scanning, you actually want photogrammetry I think.

@dalias @futurebird @kechpaja @llewelly I knew that much about photogrammetry; I think I've read it uses fancy-stats/dumb-AI processes like maximum likelihood and Markov chains. This makes sense to me.

@dalias @futurebird @kechpaja @llewelly there are snail species that can only be identified by differences in their penises (is that the correct plural form?)

@futurebird @dalias @kechpaja @llewelly Or, you get a load of smaller ants to carry tiny cameras and take photos from a variety of different angles.

Note: This does not work for the smallest ants.