
"Properly size image" report is too strict? #11593

Open
bakura10 opened this issue Oct 24, 2020 · 12 comments

Comments

bakura10 commented Oct 24, 2020

Summary

Hi!

I am trying to get rid of the "Properly size images" warnings, but I am hitting a wall here. Here is the kind of error I get:

[screenshot of the flagged images in the Lighthouse report]

After checking, it appears that I serve an 800px image while the actual displayed size is 362px. Since the DPR is 2, serving a perfectly sized image would mean 362 x 2 = 724px. But as the design is fluid, the displayed size of course depends on the screen width.

In my srcset, I have images that range from 300px to 1000px in increments of 100px. The issue is that the tolerance seems very small, and I would need to serve many more sizes (maybe in increments of 20 or 30px)?
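
For illustration, the setup is roughly the following (a minimal sketch: the CDN URL format and the sizes attribute are placeholders, not the actual theme code):

```js
// Minimal sketch of the markup described above: one srcset candidate per
// 100px step from 300px to 1000px. The cdnUrl helper and its URL format are
// hypothetical placeholders, not the actual theme code.
const widths = [];
for (let w = 300; w <= 1000; w += 100) widths.push(w);

const cdnUrl = (w) => `https://cdn.example.com/products/shirt.jpg?width=${w}`;
const srcset = widths.map((w) => `${cdnUrl(w)} ${w}w`).join(', ');

// At a displayed width of 362 CSS px and DPR 2 the browser needs ~724px,
// so it selects the 800w candidate: roughly 10% wider than necessary,
// which is what the audit flags as wasted bytes.
console.log(`<img src="${cdnUrl(800)}" srcset="${srcset}" sizes="(min-width: 990px) 25vw, 50vw">`);
```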

I have two problems with that:

  • The document size would grow quite a lot (especially on product listing pages, where I can have up to 50 products), so adding that many srcset entries would likely outweigh the minor win you get from serving an exact size.
  • This makes poor use of the CDN: we use Shopify to serve images, and whenever you request a given size, it generates a new copy on the fly at each edge. This initial resize can take a bit of time (especially if the original image is very large), so with far too many srcset entries, many more users will hit the initial resize, and the cache will fill much faster, which again leads to more resizing.

I would like to know what the recommended practice is here, and whether being too strict here actually encourages a bad practice that leads to extremely large document sizes and poor CDN usage.

Thanks :)

patrickhulce (Collaborator) commented Oct 25, 2020

Thanks for filing @bakura10!

This audit could do with a bit of loosening, but there are a few other things to keep in mind too...

  • To avoid an explosion of sizes and to best utilize a CDN for fluid images, it's recommended to select breakpoints that scale with the area of the image (aka the file size) rather than with a single dimension. This would probably solve your case too (though it currently might get flagged when the image is smaller).
  • The document size increase per image typically pales in comparison to the savings on the image (~33KB here). If you're finding individual <picture> elements are even just 1KB after gzip, then something is seriously wrong, and they should probably be dynamically generated instead.

For the Lighthouse team: in line with file-size breakpoint approaches, I think we should ignore up to ~10-20KB of savings even on smaller images to encourage this best practice (if a picture or srcset is already detected; if they didn't use either, then we can keep the lower threshold 😃).

bakura10 (Author) commented Oct 26, 2020

@patrickhulce Thanks for the feedback. The gzip compression is a good point, I will run some tests to check the impact.

For the first point, I am not sure I follow. We have a fluid design, so there are potentially infinitely many different sizes for the image depending on the width of the screen. Do you mean that we should provide fixed sizes in the srcset based on the 4-5 most popular sizes?

patrickhulce (Collaborator) commented Oct 26, 2020

Do you mean that we should provide fixed sizes in the srcset based on the 4-5 most popular sizes?

No, for fluid-sized situations such as yours I'm suggesting generating sizes in increments according to the area of the image rather than a single dimension (you mentioned that they "range from 300px to 1000px in increments of 100px" along their longest dimension).

Instead, have breakpoints for every 0.1 megapixel increase in resolution (width * height, not just width or just height), or go by file size like the method described in this post. If we decide to loosen our thresholds, this is the sort of strategy we would respect.
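
For what it's worth, a rough sketch of what area-based breakpoints could look like; the 3:2 aspect ratio, the 0.1 megapixel step, and the 1000px cap are illustrative assumptions, not values prescribed by Lighthouse:

```js
// Generate width breakpoints so that each step adds ~0.1 megapixel of area
// (width * height) instead of a fixed number of pixels of width.
const aspectRatio = 3 / 2;    // assumed width / height ratio of the source image
const stepPixels = 0.1 * 1e6; // area added per breakpoint (~0.1 MP)
const maxWidth = 1000;        // stop at the largest size the CDN should serve

const breakpoints = [];
for (let area = stepPixels; ; area += stepPixels) {
  // area = width * height and height = width / aspectRatio, so width = sqrt(area * aspectRatio).
  const width = Math.round(Math.sqrt(area * aspectRatio));
  if (width > maxWidth) break;
  breakpoints.push(width);
}

console.log(breakpoints); // [387, 548, 671, 775, 866, 949] for these assumptions
```

Because file size tracks pixel count much more closely than width alone, equal-area steps keep the byte difference between neighbouring candidates roughly constant, so the worst-case waste from picking the next size up stays bounded no matter how large the image gets.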

paulirish (Member) commented Nov 10, 2020

For the Lighthouse team: in line with file-size breakpoint approaches, I think we should ignore up to ~10-20KB of savings even on smaller images to encourage this best practice (if a picture or srcset is already detected; if they didn't use either, then we can keep the lower threshold 😃).

👍

sahilcode17 commented Aug 19, 2021

Is this issue still available to work on?

patrickhulce (Collaborator) commented Aug 19, 2021

Yes go for it @sahilcode17! You'll want to look at the uses-responsive-images audit for this one.

for (const image of images) {
  // Give SVG a free pass because creating a "responsive" SVG is of questionable value.
  // Ignore CSS images because it's difficult to determine what is a spritesheet,
  // and the reward-to-effort ratio for responsive CSS images is quite low https://css-tricks.com/responsive-images-css/.
  if (image.mimeType === 'image/svg+xml' || image.isCss) {
    continue;
  }

  // Skip if we couldn't collect natural image size information.
  if (!image.naturalDimensions) continue;
  // ...
}

priyang12 commented Sep 30, 2021

Can I work on this issue if no one else is working on it?

adamraine (Member) commented Oct 1, 2021

Can I work on this issue if no one else is working on it?

Yes

O2Graphics commented Dec 4, 2021

Hello! :-)

Is anyone working on fixing this problem?
I have exactly the same issue as @bakura10. In my case, it's with the "Moto G4", and the DPR is 3 in Chrome's Lighthouse (or 2.625 for some strange reason: https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/config/constants.js#L60-L63, and it looks like a hack: #10741).
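
To make the effect of that emulated device scale factor concrete, here is a small illustrative sketch of the arithmetic (the 362px display width is reused from the screenshot at the top of this issue; this is not the audit's actual code):

```js
// Roughly speaking, the audit expects the natural width of the served file to be
// at least displayed CSS width * device scale factor; anything much larger than
// that counts as wasted bytes.
const displayedCssWidth = 362; // CSS pixels, from the example at the top of the issue
for (const dpr of [2, 2.625, 3]) {
  const requiredWidth = Math.ceil(displayedCssWidth * dpr);
  console.log(`DPR ${dpr}: needs a natural width of about ${requiredWidth}px`);
}
// DPR 2     -> ~724px
// DPR 2.625 -> ~951px (the emulated Moto G4 value from constants.js)
// DPR 3     -> ~1086px
```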

TripleEquals (Contributor) commented Mar 6, 2022

I believe I have the bandwidth to take this on; I'm just looking for some clarification on the intent of the potential fix.

The intent as I understand it is to allow the audit to pass even if the wasted bytes exceed the IGNORE_THRESHOLD_IN_BYTES threshold, so long as the developer used srcset in an attempt to pass the audit; but also that the image should pass only if the wasted bytes are below a new, sliding threshold based on the image size. Is that correct?

connorjclark (Collaborator) commented Mar 17, 2022

The intent as I understand it is to allow the audit to pass even if the wasted bytes exceed the IGNORE_THRESHOLD_IN_BYTES threshold, so long as the developer used srcset in an attempt to pass the audit; but also that the image should pass only if the wasted bytes are below a new, sliding threshold based on the image size. Is that correct?

Almost: although we want to encourage breakpoints by image area (not width or height), we weren't thinking of making that explicit in the audit. By introducing an IGNORE_THRESHOLD_IN_BYTES_BREAKPOINTS_PRESENT (which is ~10-20KB, as @patrickhulce suggested), the audit would fail less often for folks whose breakpoints are based on area (which means that the delta between each image would be larger than if going by just one dimension).

So no sliding threshold. Just a different threshold if the image has a srcset or was in a <picture> element (these properties are already on the ImageElement artifact).
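
For what it's worth, a minimal sketch of how that could look inside the audit. The new constant name comes from this thread, the 20KB value is just the top of the suggested range, the 2048 value for the existing constant is quoted from memory, and `element.srcset` / `element.isPicture` stand in for the ImageElement artifact properties mentioned above:

```js
// Existing low threshold for images with no responsive markup at all
// (value quoted from memory, check the audit source).
const IGNORE_THRESHOLD_IN_BYTES = 2048;
// Proposed looser threshold when the author already ships srcset / <picture>
// breakpoints (~10-20KB per the discussion above; 20KB picked arbitrarily here).
const IGNORE_THRESHOLD_IN_BYTES_BREAKPOINTS_PRESENT = 20 * 1024;

/** Pick which threshold applies to a given ImageElement. */
function thresholdFor(element) {
  const hasBreakpoints = Boolean(element.srcset) || element.isPicture;
  return hasBreakpoints
    ? IGNORE_THRESHOLD_IN_BYTES_BREAKPOINTS_PRESENT
    : IGNORE_THRESHOLD_IN_BYTES;
}

// In the audit's results loop, savings below the chosen threshold would be ignored:
// if (wastedBytes < thresholdFor(element)) continue;
```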

To avoid an explosion of sizes and to best utilize a CDN for fluid images, it's recommended to select breakpoints that scale with the area of the image (aka the file size) rather than with a single dimension.

I couldn't find any reference to this in our docs, so we should probably add one. Anyone know the origin of this "best practice"?
