Chrissie

Members
  • Content Count

    548
  • Joined

  • Last visited

  • Days Won

    24

Chrissie last won the day on April 5

Chrissie had the most liked content!

2 Followers

About Chrissie

  • Rank
    Advanced Member

Profile Information

  • Gender
    Male
  • Location
    Switzerland

Recent Profile Visitors

4,998 profile views
  1. Well, you got me on that one. I'm not a mathematician myself, but you can't really follow along without some understanding of the mathematics involved. I'll try anyway: forget about the image for a moment, and just look at a scanline, i.e. a single row of pixels. You will see a pattern of intensity changes, depending on the nature of the subject you were taking a picture of. Let's assume you were taking a picture of a b&w checkerboard pattern. Your scanline could then initially be represented by a square wave, alternating between zero intensity (i.e. black) and maximum intensity (i.e. white), with sharp transitions between the two levels. So we have lots of maximum intensity changes in our example. JPEG compression is intended to reduce the storage requirements, at an "acceptable" loss of information. The way it's done involves approximating the square wave (in our example) by a set of sine waves of suitable frequencies and intensities, and superimposing those different sine waves. The compression algorithm then basically stores only the amplitude values of the sine waves which were taken into consideration, instead of storing each pixel's intensity value. By taking ever more sine waves at ever increasing frequencies (and ever more minuscule amplitude values) into consideration, you can approximate the original square wave at arbitrary precision. The "lossy" part of JPEG compression comes from "cutting off", or discarding, the highest-frequency parts of that approximation. See this image (and the surrounding article) for further reference: the "K" value in the image represents the multiple of the square wave's frequency which was taken into consideration. You can see quite nicely how cranking up the K value approximates those sharp transitions ever better, and likewise how discarding the highest frequencies approximates the initially sharp (vertical) transition with only a more gradual, slope-like transition.
Which is what you perceive as a "shadow" in your example. It also helps to know that JPEG compression involves a spatial downsampling into 8x8 (or even 16x16) blocks of pixels. Please consult my initial link to Wikipedia on JPEG compression for more information. Hope that helps a bit.
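The superposition described above can be sketched in a few lines of Python (my own illustration, not part of the original post): summing odd sine harmonics reproduces the square-wave scanline ever more closely as more frequencies are kept, and truncating the series is exactly the "cutting off" that makes the compression lossy.

```python
import math

def square_wave_approx(x, k_max):
    """Approximate a +/-1 square wave by summing its odd sine
    harmonics up to k_max (the Fourier series of a square wave)."""
    total = 0.0
    for k in range(1, k_max + 1, 2):  # odd harmonics only
        total += math.sin(k * x) / k
    return 4.0 / math.pi * total

# More harmonics -> sharper transitions, closer to the ideal value of 1.0
# on the "white" half of the wave; fewer harmonics -> the gradual slope
# that shows up as a "shadow" next to sharp edges.
for k_max in (1, 5, 49):
    print(k_max, square_wave_approx(math.pi / 2, k_max))
```

Evaluating at the middle of a "white" stripe (x = π/2) shows the partial sums converging toward 1 as more frequencies are retained.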
  2. I consider those "shades" to be artifacts of the (probably) JPEG compression algorithm. This is a lossy compression algorithm, which is ... Source: https://en.wikipedia.org/wiki/JPEG#JPEG_compression Those "shadows" are also very pronounced on the left side of the almost vertical branch, right behind the bird's head. Note that the most affected areas are indeed those with the "sharpest transition in intensity", as the highlighted portion of the above quote says. I would expect those artifacts not to be visible in a raw editor program like Capture One, which allows you to view and edit the "raw", i.e. uncompressed, image data.
  3. Firmware updates are typically cumulative. That is: a more recent update contains all the fixes of all previous updates, including possibly fixes for things which previous updates goofed up on 😉
  4. @Pieter: quite to the contrary. I took the initial statement of the TO, that he "can't compromise on that", at face value. The very notion of "no compromise" doesn't lend itself to any - well - compromise. 😉 I pointed out, and backed this with quotes from Sony's original documentation, that Sony already admits to compromising on the question of "full 4K", and is inconsistent about its claim that the A7S3 can do 120fps at all. One could have expected that such a major feature would have found its way into the documentation, if it existed at all. If the prospective initial recording device (Sony A7S3) can't even record at the full specs which the TO "can't compromise" on, then there is no point in discussing the secondary question of whether the A7S3 might be able to output at full specs which it couldn't record at in the first place. Let alone the tertiary question of whether the prospective external recording device might be able to record at full specs from a source which couldn't ... I'm fully aware that the TO would have preferred a statement confirming his initial plans. I'm also fully confident that he will be better off, both financially and result-wise, if a prerequisite which he took for granted gets some more scrutiny due to reasonable doubt. If that's not relevant in your view, then I couldn't care less. Because that's the way proper requirements analysis has to be done.
  5. I think you'll have to learn to read the fine print, and not to believe all hearsay or marketing hype. For instance: Sony itself claims that this recording mode involves a "10% crop" (click on the "6" footnote at the end of that heading). It's anyone's guess whether that is a 10% crop of the overall pixel count, or whether it's applied to both width and height of the image. In the latter case, that would already amount to a 19% loss. The same source quotes the output via HDMI at a maximum of 60fps: "Compatible recorders to be announced. (As of July 2020, Atomos Ninja V is expected.)" See footnote 10 for that. Additionally, the help guide for the A7S3 lists the maximum duration for a recording at 600M on this page. Note that even at 600M the maximum frame rate is posted as 60p. Finally, unleashing a factor of two (in the recordable frame rate) simply by means of a firmware update, without increasing the power of the underlying hardware, seems extremely unlikely to me. Remember: if something sounds too good to be true, it probably isn't.
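The 19% figure follows from simple arithmetic. A quick sketch (my own back-of-the-envelope check, not from Sony's documentation):

```python
# If a 10% crop is applied to BOTH width and height, only
# 90% x 90% of the sensor area remains, i.e. a 19% loss in
# overall pixel count.
linear_crop = 0.10
remaining_area = (1 - linear_crop) ** 2   # 0.9 * 0.9 = 0.81
area_loss = 1 - remaining_area            # 0.19
print(f"area lost: {area_loss:.0%}")
```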
  6. You (or someone) could figure this out analytically, by doing the math, or pragmatically, which is what I want to propose: just borrow/rent a zoom lens, for instance the 24-105, use it to find out which focal length satisfies your requirements, then find the fixed lens with the closest focal length _below_ the one you just found, i.e. a little wider. Then buy that lens.
  7. Guys, while I'm a friend of fancy technical gimmicks myself, I can certainly understand both your enthusiasm and your frustration when things don't work out as expected. That said: you are probably aware that an external trigger like the MIOPS can only react after the fact, that is, after the beginning of the event. Fortunately, lightning strikes have a duration in the range of up to 100 ms (and also show some repeat patterns), so if you (or your external trigger) are fast enough, you may still catch some of the action while it's still going on. The reaction involves some sort of delay, and what you are apparently aiming at, and where I can be of no help, is keeping that delay as short as possible by means of clever Sony Alpha body settings. Fine enough. If your sole objective is to get some nice lightning shots, I'd like to propose front-running the lightning instead of chasing it 😉 A technique which I've read about, and have employed myself to good results, is the following. Obviously, you're only going to set everything up if you can reasonably expect some lightning strikes within the next couple of minutes, a.k.a. a thunderstorm approaching. The front-running technique goes like this:
  • Set up your camera on a tripod and take a shot with the focus on the foreground.
  • Set your camera to bulb mode (like: 30 seconds) and focus to infinity.
  • Release the trigger and hope for a lightning strike within the next 30 seconds or so. Repeat as needed.
  • Composite the shots of actual lightning and the foreground shot in image processing software.
Good luck, and I'd like to see some results, please 😉
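The compositing step at the end is typically a per-pixel "lighten" (maximum) blend: each lightning frame only contributes where it is brighter than what's already there. A minimal sketch in plain Python, using nested lists of grayscale values in place of real image files (the function name and sample data are my own illustration; for real photos you'd use an image library):

```python
def lighten_blend(base, layers):
    """Composite frames by taking the per-pixel maximum ('lighten'
    blend): start from the foreground shot and let each lightning
    frame overwrite only the pixels where it is brighter."""
    result = [row[:] for row in base]
    for layer in layers:
        for y, row in enumerate(layer):
            for x, value in enumerate(row):
                if value > result[y][x]:
                    result[y][x] = value
    return result

foreground = [[10, 10], [10, 10]]
bolt_frame = [[10, 255], [10, 10]]   # one bright pixel: the lightning
print(lighten_blend(foreground, [bolt_frame]))
```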
  8. Dan, sorry for your mishap! I happen to own the same lens, and checked mine across the full focal range: the "uneven" structure at about half the radial distance from the center is not visible in my copy of this lens. I'm afraid something is in fact broken inside your lens. Since you were willing to spend 2500 bucks to get a top-quality lens, you apparently expect top quality in return. Even if it hurts financially, I'd recommend sending it in for inspection and repair. Maybe ask for a quote first, although I have no idea about the cost range. (The top end of the possible cost range should be 2500 max.) It might also be worth checking whether you unknowingly bought insurance against drop damage along with the purchase, like I did with mine. Good luck!
  9. I hope it's nothing serious, and get well soon! The lens hood, whatever its kind, can certainly wait. Hang in there!
  10. Apparently you already have all components ready at your disposal. Why don't YOU try both ways and tell US about any differences?!
  11. While I like @tadwil's suggestion of determining the given level of vignetting by taking a picture of a white wall, I'm a little confused as to where that material might be. If it's at the object-facing tips of the lens hood petals, then grinding those down would not alleviate the vignetting, which typically happens at the corners of a picture and is caused by excess material at the transition between any two adjacent lens hood petals. If you're talking about the lens-facing side of the lens hood, then grinding that circular face down would indeed be unnecessarily tedious, just to remove material from the object-facing areas between the petals. Especially if the hood is made of metal. My Sony ones are all plastic, and serve their purpose perfectly well.
  12. Does the stripe pattern change its shape/position when you press lightly on the screen, as in a touch-screen operation? If so, the pattern is caused by some slight inner mechanical tension, which I would consider harmless.
  13. In extension to @Pieter's already perfect explanation, please note how the shape of a lens hood is tied to a specific combination of focal length and front element diameter, as shown by the following visualizations. The front element diameter corresponds to the diameter of the blue tube, which mathematically intersects with the viewing pyramid (aspect ratio 3:2, as in your full-frame sensor). See how a varying focal length results in a corresponding change in the shape of the lens hood petals (images: wide angle, intermediate angle, tele angle).
  14. Some folks suggest explicitly setting the lens's AF/MF switch to MF. Then it should retain its set focus across shutdowns of the body.
  15. Converting individual images/photos into a movie: take a look at ffmpeg, open source, free of charge, and available for all major OSes.
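A minimal ffmpeg invocation for this (the filenames and frame rate are hypothetical placeholders, adjust them to your material):

```shell
# Assumes photos numbered img_0001.jpg, img_0002.jpg, ...;
# -framerate sets how many source images make up one second of video,
# and yuv420p keeps the output playable in most players.
ffmpeg -framerate 24 -i img_%04d.jpg -c:v libx264 -pix_fmt yuv420p timelapse.mp4
```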