## 2.0.0-beta.4

- **BREAKING CHANGE**: Removed the experimental `deserializer` interface and its supporting code. Applications using this interface should migrate to the `Unmarshaler` interface by implementing `UnmarshalMaxMindDB(d *Decoder) error` instead (see the migration sketch below this list).
- `Open` and `FromBytes` now accept options.
- **BREAKING CHANGE**: `IncludeNetworksWithoutData` and `IncludeAliasedNetworks` now return a `NetworksOption` rather than being one themselves. They must now be called as functions: `Networks(IncludeAliasedNetworks())` instead of `Networks(IncludeAliasedNetworks)`. This change improves the documentation organization (see the usage sketch below this list).
- Added an `Unmarshaler` interface to allow custom decoding implementations for performance-critical applications. Types implementing `UnmarshalMaxMindDB(d *Decoder) error` automatically use the custom decoding logic instead of reflection, following the same pattern as `json.Unmarshaler`.
- Added a public `Decoder` type and `Kind` constants in the `mmdbdata` package for manual decoding. `Decoder` provides methods such as `ReadMap()`, `ReadSlice()`, `ReadString()`, `ReadUInt32()`, and `PeekKind()`, and the `Kind` type includes the helper methods `String()`, `IsContainer()`, and `IsScalar()` for type introspection (see the `Kind` sketch below this list). The main `maxminddb` package re-exports these types for backward compatibility. `NewDecoder()` supports an options pattern for future extensibility.
- Enhanced `UnmarshalMaxMindDB` to work with nested struct fields, slice elements, and map values. The custom unmarshaler is now called recursively for any type that implements the `Unmarshaler` interface, similar to `encoding/json`.
- Improved error messages to include byte offset information and, for the reflection-based API, path information for nested structures in JSON Pointer format. For example, errors may now read "at offset 1234, path /city/names/en" or "at offset 1234, path /list/0/name" instead of only the underlying error message.
- **PERFORMANCE**: Added a string interning optimization that reduces allocations while maintaining thread safety, cutting the allocation count from 33 to 10 per operation in downstream libraries. It uses a fixed 512-entry cache with per-entry mutexes for bounded memory usage (~8 KB) while minimizing lock contention.
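
A minimal migration sketch for the removed `deserializer` interface follows. The `HypotheticalCity` type and its fields are invented for illustration, and the exact shapes of the `Decoder` read methods (for example, whether `ReadMap` also reports the map size) are assumptions; only the method names above come from this release, so consult the package documentation for the authoritative signatures.

```go
package geodata

import (
	"github.com/oschwald/maxminddb-golang/v2"
)

// HypotheticalCity is an illustrative record type; its fields are assumptions,
// not part of the library.
type HypotheticalCity struct {
	GeoNameID uint32
	Name      string
}

// UnmarshalMaxMindDB replaces reflection-based decoding for this type. The
// sketch assumes ReadMap yields raw key bytes paired with an error, and that
// ReadUInt32/ReadString return (value, error). It also assumes the record
// contains only the keys handled below; a complete implementation must
// consume the values of any other keys as well.
func (c *HypotheticalCity) UnmarshalMaxMindDB(d *maxminddb.Decoder) error {
	mapIter, err := d.ReadMap()
	if err != nil {
		return err
	}
	for key, keyErr := range mapIter {
		if keyErr != nil {
			return keyErr
		}
		switch string(key) {
		case "geoname_id":
			v, err := d.ReadUInt32()
			if err != nil {
				return err
			}
			c.GeoNameID = v
		case "name":
			v, err := d.ReadString()
			if err != nil {
				return err
			}
			c.Name = v
		}
	}
	return nil
}
```

With such an implementation in place, a lookup like `db.Lookup(addr).Decode(&city)` (assuming the v2 `Lookup`/`Decode` API) should route through the custom unmarshaler instead of reflection, and per the note above the unmarshaler is also applied recursively to nested struct fields, slice elements, and map values that implement the interface.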
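
Next, a small sketch of the new `NetworksOption` call form. The database path is illustrative, and iterating `Networks` with `range` assumes the Go 1.23 iterator shape of the v2 API rather than anything stated in the entry above.

```go
package main

import (
	"log"

	"github.com/oschwald/maxminddb-golang/v2"
)

func main() {
	db, err := maxminddb.Open("GeoLite2-City.mmdb") // path is illustrative
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Before beta.4 the option was passed as a value:
	//   db.Networks(maxminddb.IncludeAliasedNetworks)
	// From beta.4 the option is constructed by calling the function:
	for result := range db.Networks(maxminddb.IncludeAliasedNetworks()) {
		// Ranging over the return value assumes the iterator-based v2 API.
		_ = result
	}
}
```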
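
Finally, a sketch of `Kind` introspection inside a custom unmarshaler. The `HypotheticalLabel` type is invented, and the import path and the exact method signatures are assumptions beyond the method names listed above.

```go
package geodata

import (
	"fmt"

	"github.com/oschwald/maxminddb-golang/v2/mmdbdata" // import path assumed from the package name
)

// HypotheticalLabel is an illustrative type that expects a single string value.
type HypotheticalLabel struct {
	Value string
}

// UnmarshalMaxMindDB peeks at the next value's Kind and rejects containers
// before attempting to read a string.
func (l *HypotheticalLabel) UnmarshalMaxMindDB(d *mmdbdata.Decoder) error {
	kind, err := d.PeekKind()
	if err != nil {
		return err
	}
	if !kind.IsScalar() {
		// Kind.String() gives a readable name for the unexpected kind.
		return fmt.Errorf("expected a scalar value, got %s", kind.String())
	}
	s, err := d.ReadString()
	if err != nil {
		return err
	}
	l.Value = s
	return nil
}
```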