
MrMunchkin t1_j7ld0nr wrote

It's both. JWST produces imaging data without any intervention by a human. Generally, that data is then modeled by a human, but a huge number of these findings are also discovered by algorithms, with little to no human interaction needed to find them.

32

axialintellectual t1_j7mjsi4 wrote

JWST doesn't. In this case it's arguable that it sort of did (they picked this up in calibration data, which are taken regularly), but given Webb's limited lifetime and the extreme pressure on observing time, it's essentially always being directed to look at something, calibrating, or changing its orientation. It's not an automated survey telescope!

−1

MrMunchkin t1_j7qnst8 wrote

That's just not true. Because time is limited, they point JWST at a sector and capture hundreds of exposures that get combined into composite images. Those images are processed by humans using algorithms, and in a lot of cases machine learning.

I think you're coming at this from the standpoint of a telescope on Earth, which has an extremely narrow view of space. With JWST, the images it takes are truly, truly massive, producing hundreds of gigabytes of data.

0

axialintellectual t1_j7qpecg wrote

That does not - at all - resemble the work my colleagues and I are doing with JWST data. MIRI MRS has a FoV of 6.6'' x 7.7''; that's really quite large but it's not gigantic by any means (the size of the detector is impressive, but that's because this is an IFU). Also, I haven't seen particularly unusual amounts of machine learning in any of the data processing papers so far. Could you clarify what you're talking about here?

0

MrMunchkin t1_j7rb14a wrote

Yikes, there's too much to unpack here, but I think what you're referencing are the images created from the archive. Are you familiar with the three stages of the pipeline?
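
For anyone who wants to poke at that archive themselves, here's a minimal sketch using astroquery's MAST interface (the query criteria are just illustrative, not a recipe):

```python
# Minimal sketch: browsing public JWST products in the MAST archive.
from astroquery.mast import Observations

# Find JWST observations taken with the MIRI IFU (criteria illustrative).
obs = Observations.query_criteria(obs_collection="JWST",
                                  instrument_name="MIRI/IFU")

# List the data products attached to the first observation; the
# subgroup column shows pipeline stages (UNCAL, RATE, CAL, ...).
products = Observations.get_product_list(obs[:1])
print(products["productSubGroupDescription"])
```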

Remember too, there are 10 detectors on JWST, and the SSR holds only 65 GB, so much of the processing is done on board to reduce data excess. Tons more info can be found here: https://jwst-docs.stsci.edu/jwst-general-support/jwst-data-volume-and-data-excess

More info on the data pipeline can be found here: https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline-overview/stages-of-jwst-data-processing
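
As a rough illustration of those three stages, this is what running them with STScI's `jwst` Python package looks like (filenames are placeholders; see the linked docs for the real details):

```python
# Sketch of the three JWST pipeline stages via the STScI `jwst` package.
from jwst.pipeline import Detector1Pipeline, Image2Pipeline, Image3Pipeline

# Stage 1: ramps-to-slopes; raw "_uncal" ramp data become
# countrate ("_rate") images.
Detector1Pipeline.call("jw_example_uncal.fits", save_results=True)

# Stage 2: calibrates the slope images (WCS, flat field, flux
# calibration), producing "_cal" files.
Image2Pipeline.call("jw_example_rate.fits", save_results=True)

# Stage 3: combines multiple calibrated exposures; the input is an
# association (ASN) file listing them.
Image3Pipeline.call("jw_example_asn.json", save_results=True)
```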

Also keep in mind JWST takes thousands of exposures across its instruments. That data accumulates in the SSR and is streamed to Earth every 12 hours or so.
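
To get a feel for that 65 GB figure, here's a back-of-the-envelope sketch (it assumes a 2048 x 2048, 16-bit detector read; the capacity is the number quoted above):

```python
# Back-of-the-envelope: how many raw full-frame reads fit in the SSR.
# Assumes a 2048 x 2048 detector at 16 bits/pixel; ignores compression
# and on-board co-adding, which stretch this considerably.
frame_bytes = 2048 * 2048 * 2      # ~8.4 MB per full-frame read
ssr_bytes = 65e9                   # SSR capacity quoted above
print(ssr_bytes / frame_bytes)     # ~7,700 frames before downlink
```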

0

axialintellectual t1_j7rmtl1 wrote

> there's too much to unpack here

Well, no, there really isn't. You say Webb produces data 'without intervention by a human', and 'a huge number of findings [are] discovered by algorithms'. That's a really weird way of putting it, because the vast majority of Webb time is obtained by individual projects designed to look at specific things, with dedicated analysis plans. Of course there's a non-negligible amount of bycatch, so to speak - but that's not what I read in your comment.

1