
release-24.2: colexec: fix type schema corruption in an edge case #133761

Open
wants to merge 1 commit into base: release-24.2
Conversation

@blathers-crl blathers-crl bot commented Oct 30, 2024

Backport 1/1 commits from #133624 on behalf of @yuzefovich.

/cc @cockroachdb/release


This commit fixes type schema corruption in the vectorized engine in an edge case. In particular, consider the following circumstances:

  • during the physical planning, when creating a new stage of processors, we often reuse the same type slice (stored in InputSyncSpec.ColumnTypes) that we got from the previous stage. In other words, we might have memory aliasing, but only on the gateway node, because the remote nodes deserialize their specs and thus each gets its own memory allocation.
  • throughout the vectorized operator planning, as of 85fd4fb, for each newly projected vector we append the corresponding type to the type slice in scope. Some operators (e.g. BatchSchemaSubsetEnforcer) also capture the intermediate state of the type slice.
  • as expected, when appending a type to the slice, we reuse any spare capacity, which means we often append into the slice that came to us via InputSyncSpec.ColumnTypes.
  • now, if two stages of processors happen to share the same underlying type slice with some free capacity AND we need to append vectors for each stage, then the vectorized planning for the later stage might corrupt the type schema captured by an operator for the earlier stage.

The bug is effectively the same as the one outlined in the comment deleted by 85fd4fb:

```
// As an example, consider the following scenario in the context of
// planFilterExpr method:
// 1. r.ColumnTypes={types.Bool} with len=1 and cap=4
// 2. planSelectionOperators adds another types.Int column, so
//    filterColumnTypes={types.Bool, types.Int} with len=2 and cap=4
//    Crucially, it uses exact same underlying array as r.ColumnTypes
//    uses.
// 3. we project out second column, so r.ColumnTypes={types.Bool}
// 4. later, we add another types.Float column, so
//    r.ColumnTypes={types.Bool, types.Float}, but there is enough
//    capacity in the array, so we simply overwrite the second slot
//    with the new type which corrupts filterColumnTypes to become
//    {types.Bool, types.Float}, and we can get into a runtime type
//    mismatch situation.
```
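
For illustration only, the scenario in that deleted comment can be reproduced with a short, self-contained Go sketch. The string values here are hypothetical stand-ins for the planner's actual *types.T values; everything else is plain Go slice semantics:

```go
package main

import "fmt"

func main() {
	// Stand-in for r.ColumnTypes: len=1 and cap=4, as in the example.
	columnTypes := make([]string, 1, 4)
	columnTypes[0] = "Bool"

	// An "Int" column is appended. Since cap > len, the append reuses
	// the same backing array that columnTypes points at.
	filterColumnTypes := append(columnTypes, "Int")

	// Later, another column is appended to the original slice. This
	// overwrites the shared second slot of the backing array...
	columnTypes = append(columnTypes, "Float")
	_ = columnTypes

	// ...silently corrupting filterColumnTypes.
	fmt.Println(filterColumnTypes) // [Bool Float], not [Bool Int]
}
```

Nothing in the program ever wrote to filterColumnTypes directly; the corruption happens purely through the shared backing array, which is what makes this class of bug hard to spot.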

The only differences are:

  • aliasing of the type slice occurs via InputSyncSpec.ColumnTypes, which is often used as the starting point for populating NewColOperatorResult.ColumnTypes, which in turn is used throughout the vectorized operator planning;
  • columns are "projected out" by sharing the type schema between two stages of DistSQL processors.

This commit addresses the issue by capping the slice to its length right before we get into the vectorized planning. As a result, whenever we need to append a type, we make a fresh allocation, and any possible memory aliasing with a different stage of processors is gone.
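
A minimal sketch of the capping idea, assuming it is done with Go's full slice expression s[:len(s):len(s)] (string stand-ins for the actual types again, not the real planner code):

```go
package main

import "fmt"

func main() {
	columnTypes := make([]string, 1, 4)
	columnTypes[0] = "Bool"

	// Cap the slice to its length with a full slice expression so that
	// cap == len and any subsequent append must allocate a fresh
	// backing array.
	columnTypes = columnTypes[:len(columnTypes):len(columnTypes)]

	// Both appends below now copy into separate, freshly allocated
	// arrays, so neither can clobber the other's view of the schema.
	filterColumnTypes := append(columnTypes, "Int")
	columnTypes = append(columnTypes, "Float")
	_ = columnTypes

	fmt.Println(filterColumnTypes) // [Bool Int]
}
```

The cost is one extra allocation per stage that actually appends, which is cheap insurance against cross-stage aliasing.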

I haven't quite figured out the exact conditions that are needed for this bug to occur, but my intuition says that it should be quite rare in practice (otherwise we'd have seen this much sooner given that the offending commit was merged more than a year ago and was backported to older branches).

Fixes: #130402.

Release note (bug fix): Previously, CockroachDB could encounter an internal error of the form `interface conversion: coldata.Column is` in an edge case, and this is now fixed. The bug is present in versions 22.2.13+, 23.1.9+, and 23.2+.


Release justification: bug fix.

@blathers-crl blathers-crl bot requested a review from a team as a code owner October 30, 2024 00:26
@blathers-crl blathers-crl bot force-pushed the blathers/backport-release-24.2-133624 branch from bf0e606 to 1ed8233 on October 30, 2024 00:26
@blathers-crl blathers-crl bot requested review from rytaft and removed request for a team October 30, 2024 00:26
@blathers-crl blathers-crl bot added the blathers-backport and O-robot labels Oct 30, 2024
blathers-crl bot commented Oct 30, 2024

Thanks for opening a backport.

Please check the backport criteria before merging:

  • Backports should only be created for serious issues or test-only changes.
  • Backports should not break backwards-compatibility.
  • Backports should change as little code as possible.
  • Backports should not change on-disk formats or node communication protocols.
  • Backports should not add new functionality (except as defined here).
  • Backports must not add, edit, or otherwise modify cluster versions; or add version gates.
  • All backports must be reviewed by the owning area's TL. For more information as to how that review should be conducted, please consult the backport policy.
If your backport adds new functionality, please ensure that the following additional criteria are satisfied:
  • There is a high priority need for the functionality that cannot wait until the next release and is difficult to address in another way.
  • The new functionality is additive-only and only runs for clusters which have specifically “opted in” to it (e.g. by a cluster setting).
  • New code is protected by a conditional check that is trivial to verify and ensures that it only runs for opt-in clusters. State changes must be further protected such that nodes running old binaries will not be negatively impacted by the new state (with a mixed version test added).
  • The PM and TL on the team that owns the changed code have signed off that the change obeys the above rules.
  • Your backport must be accompanied by a post to the appropriate Slack
    channel (#db-backports-point-releases or #db-backports-XX-X-release) for awareness and discussion.

Also, please add a brief release justification to the body of your PR to justify this
backport.

@blathers-crl blathers-crl bot added the backport label Oct 30, 2024
@cockroach-teamcity

This change is Reviewable

@yuzefovich yuzefovich requested review from mgartner and removed request for rytaft October 30, 2024 01:54