How Old Can a File Be?
Summary
TL;DR: This video dives into the quirky and inconsistent ways different operating systems handle file timestamps. While Unix-based systems like Linux count from the Unix epoch (January 1st, 1970), Windows starts from January 1st, 1601, and macOS has its own oddities: it claims to support "distant past" dates but in practice only goes back to September 21st, 1677. The video explores the reasons behind these seemingly arbitrary dates and their implications, ultimately revealing how operating-system design choices and legacy code determine how far back a file's timestamp can go.
Takeaways
- 😀 Windows file timestamps start from January 1st, 1601, based on the Gregorian calendar cycle.
- 😀 Linux uses Unix time starting from January 1st, 1970, and supports both past and future dates using a signed 32-bit integer.
- 😀 macOS, with the Apple File System (APFS), has an earliest practical date of September 21st, 1677, despite claiming to represent dates as far back as year 1.
- 😀 Windows File Explorer displays timestamps starting from January 1st, 1980, a limitation inherited from MS-DOS, even though NTFS supports dates back to 1601.
- 😀 The Unix epoch (January 1st, 1970) serves as the reference for time representation in Linux, and it is the most consistent system across file systems.
- 😀 macOS has oddities in its timestamp handling, such as inconsistent time-zone conversions in Finder and no support for future dates beyond April 11th, 2262.
- 😀 While Linux handles time cleanly, macOS makes various arbitrary choices, like using September 21st, 1677, as the earliest possible file timestamp.
- 😀 The reason for Windows' 1601 starting point is tied to the Gregorian calendar cycle, which was important when Windows NT was developed.
- 😀 The maximum future date in Linux and Windows is theoretically bounded by the integer size (32-bit or 64-bit), with some inconsistencies in how different systems display it.
- 😀 macOS relies on a signed 64-bit integer counting from the Unix epoch, with its own quirks, like showing incorrect timestamps in Finder and the terminal.
- 😀 Time representation in operating systems is riddled with historical and arbitrary decisions, and no system is entirely consistent or intuitive when it comes to handling very old dates.
Q & A
What is Unix time, and why is January 1st, 1970 significant?
-Unix time is a system for tracking time, where the starting point, known as 'the epoch,' is January 1st, 1970. This date serves as the reference from which time is measured by incrementing a counter every second.
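As a quick illustration (a minimal Python sketch), timestamp 0 maps to the epoch itself:

```python
from datetime import datetime, timezone

# Unix time counts seconds since the epoch: 1970-01-01 00:00:00 UTC.
# Timestamp 0 is the epoch itself; each elapsed second increments the counter.
epoch = datetime.fromtimestamp(0, tz=timezone.utc)
print(epoch.isoformat())  # 1970-01-01T00:00:00+00:00
```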
How does Windows represent file timestamps?
-Windows uses a file time system, with timestamps measured in 100-nanosecond intervals since January 1st, 1601. This system uses unsigned integers, so it can only represent positive values, with January 1st, 1601 being the earliest possible timestamp.
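Decoding a FILETIME value is simple arithmetic. The sketch below (Python, with a hypothetical helper name `filetime_to_datetime`) converts the 100-nanosecond tick count into a calendar date:

```python
from datetime import datetime, timedelta, timezone

# Windows FILETIME: an unsigned 64-bit count of 100-nanosecond "ticks"
# measured from 1601-01-01 00:00:00 UTC.
WINDOWS_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(ticks: int) -> datetime:
    # 10 ticks per microsecond; timedelta handles the calendar math.
    return WINDOWS_EPOCH + timedelta(microseconds=ticks // 10)

# Tick count 0 is the earliest timestamp Windows can represent.
print(filetime_to_datetime(0))  # 1601-01-01 00:00:00+00:00
# The Unix epoch sits 11,644,473,600 seconds after the Windows epoch.
print(filetime_to_datetime(116_444_736_000_000_000))  # 1970-01-01 00:00:00+00:00
```

Because the counter is unsigned, negative ticks (dates before 1601) simply cannot be expressed.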
Why does Windows use January 1st, 1601 as its reference date?
-The choice of January 1st, 1601, as the reference date for Windows is tied to the start of the Gregorian calendar cycle, which was adopted during the development of Windows NT. The date marks the beginning of a full 400-year calendar cycle.
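The 400-year cycle can be checked directly: it always contains 146,097 days, which happens to be a whole number of weeks, so the Gregorian calendar repeats exactly. A quick Python verification:

```python
import calendar

# Count the days in one full 400-year Gregorian cycle (1601-2000 inclusive).
days = sum(366 if calendar.isleap(year) else 365 for year in range(1601, 2001))
print(days)        # 146097
print(days % 7)    # 0 -> a whole number of weeks, so weekdays repeat too
print(days / 400)  # 365.2425, the Gregorian mean year length
```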
What limitation exists in Windows File Explorer related to file timestamps?
-Although Windows can represent file timestamps as early as January 1st, 1601, File Explorer can only display timestamps from January 1st, 1980, onward. This limitation is due to the legacy behavior of MS-DOS and older systems.
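The 1980 floor comes from the FAT on-disk format, which packs a date into 16 bits with the year stored as an offset from 1980. A Python sketch of the decoding:

```python
def decode_dos_date(packed: int) -> tuple[int, int, int]:
    """Decode a 16-bit MS-DOS date: 7 bits year-since-1980, 4 bits month, 5 bits day."""
    year = 1980 + ((packed >> 9) & 0x7F)
    month = (packed >> 5) & 0x0F
    day = packed & 0x1F
    return year, month, day

# The smallest valid value (day 1, month 1, year offset 0) is 1980-01-01,
# which is why nothing earlier can be shown for FAT-style timestamps.
print(decode_dos_date(0x0021))  # (1980, 1, 1)
# Seven year bits cap the format at 1980 + 127 = 2107.
print(decode_dos_date(0xFFFF)[0])  # 2107
```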
How does Linux handle file timestamps, and what makes it unique?
-Linux uses Unix time and stores file timestamps as signed 32-bit integers, allowing for the representation of both past and future dates. This gives Linux the ability to represent timestamps as early as December 13th, 1901. The system is consistent and reliable across distributions.
What is the earliest date that Linux can represent?
-Linux can represent dates as early as December 13th, 1901, due to its use of a signed 32-bit integer to store timestamps.
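Both ends of the signed 32-bit range fall out of the same arithmetic (a minimal Python sketch):

```python
from datetime import datetime, timedelta, timezone

UNIX_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

# A signed 32-bit second count reaches about 68 years either side of 1970.
earliest = UNIX_EPOCH + timedelta(seconds=-2**31)
latest = UNIX_EPOCH + timedelta(seconds=2**31 - 1)
print(earliest)  # 1901-12-13 20:45:52+00:00
print(latest)    # 2038-01-19 03:14:07+00:00 (the "Year 2038 problem")
```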
How does macOS handle file timestamps, and why is it considered inconsistent?
-macOS uses multiple mechanisms for file timestamps, including the APFS file system and the `SetFile` command. It claims to support timestamps from as early as December 30th, year 1, but in practice it can only reliably represent dates from September 21st, 1677, causing significant inconsistencies between the different methods.
Why is the `SetFile` command in macOS limited to dates from 1904 onward?
-The `SetFile` command is limited to dates from 1904 because of legacy decisions in macOS's timestamp handling: the classic Mac OS (HFS) format counts seconds from January 1st, 1904, and the tool cannot express earlier dates, unlike other methods available on the system.
What is the 'distant past' date in macOS, and how is it determined?
-In macOS, the 'distant past' date corresponds to December 30th, year 1. This value comes from Foundation's `Date.distantPast` property in Swift, which offers no clear explanation for the choice. It is likely an arbitrary reference for very old dates, since it does not align with other common reference points like the Unix epoch.
What is the maximum date that macOS can represent, and how does it compare to Unix time?
-The furthest future date macOS can represent is April 11th, 2262, which is consistent with a 64-bit signed integer representation of time (essentially Unix time). This contrasts with Unix time’s 32-bit limitation, which restricts it to the year 2038 for 32-bit systems.
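Both macOS limits fall out of the same arithmetic: a signed 64-bit count of nanoseconds since the Unix epoch spans roughly ±292 years. A Python sketch (truncating to microseconds, since `timedelta` has no nanosecond field):

```python
from datetime import datetime, timedelta, timezone

UNIX_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

# Signed 64-bit nanoseconds since 1970, reduced to whole microseconds.
earliest = UNIX_EPOCH + timedelta(microseconds=-2**63 // 1000)
latest = UNIX_EPOCH + timedelta(microseconds=(2**63 - 1) // 1000)
print(earliest.date())  # 1677-09-21
print(latest.date())    # 2262-04-11
```

That single representation choice explains both the September 1677 floor and the April 2262 ceiling mentioned in the video.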