vendor: Update minio/sha256-simd (#5433)

* vendor: Update minio/sha256-simd

* Add go module stuff
Audrius Butkevicius 2019-01-05 09:21:42 +00:00 committed by Jakob Borg
parent 158559023e
commit ad30192dca
24 changed files with 2600 additions and 3002 deletions

go.mod
View File

@ -20,7 +20,7 @@ require (
github.com/kr/pretty v0.1.0 // indirect
github.com/lib/pq v1.0.0
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/minio/sha256-simd v0.0.0-20171213220625-ad98a36ba0da
github.com/minio/sha256-simd v0.0.0-20190104231041-e529fa194128
github.com/onsi/ginkgo v0.0.0-20171221013426-6c46eb8334b3 // indirect
github.com/onsi/gomega v0.0.0-20171227184521-ba3724c94e4d // indirect
github.com/oschwald/geoip2-golang v1.1.0

go.sum
View File

@ -41,6 +41,8 @@ github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0j
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/minio/sha256-simd v0.0.0-20171213220625-ad98a36ba0da h1:tazA5y1hWYJO8VSYbU36yBhXeIvruLXMUKu6WBtcJck=
github.com/minio/sha256-simd v0.0.0-20171213220625-ad98a36ba0da/go.mod h1:2FMWW+8GMoPweT6+pI63m9YE3Lmw4J71hV56Chs1E/U=
github.com/minio/sha256-simd v0.0.0-20190104231041-e529fa194128 h1:hEDK0Zao06IGlO1ada0FLT2g3KEot2vCqFp8gdvJqzM=
github.com/minio/sha256-simd v0.0.0-20190104231041-e529fa194128/go.mod h1:2FMWW+8GMoPweT6+pI63m9YE3Lmw4J71hV56Chs1E/U=
github.com/onsi/ginkgo v0.0.0-20171221013426-6c46eb8334b3 h1:ZN7kHmC0iunA+4UPmERwsuMQan4lUnntO6WX6H1jOO8=
github.com/onsi/ginkgo v0.0.0-20171221013426-6c46eb8334b3/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v0.0.0-20171227184521-ba3724c94e4d h1:r351oUAFgdsydkt/g+XR/iJWRwyxVpy6nkNdEl/QdAs=

View File

@ -1,21 +0,0 @@
sudo: required
dist: trusty
language: go
os:
- linux
- osx
osx_image: xcode7.2
go:
- 1.6
- 1.5
env:
- ARCH=x86_64
- ARCH=i686
script:
- diff -au <(gofmt -d .) <(printf "")
- go test -race -v ./...

View File

@ -1,120 +0,0 @@
# sha256-simd
Accelerate SHA256 computations in pure Go using AVX512 and AVX2 for Intel and ARM64 for ARM. On AVX512 it provides an up to 8x improvement (over 3 GB/s per core) in comparison to AVX2.
## Introduction
This package is designed as a replacement for `crypto/sha256`. For Intel CPUs it has two flavors for AVX512 and AVX2 (AVX/SSE are also supported). For ARM CPUs with the Cryptography Extensions, advantage is taken of the SHA2 instructions resulting in a massive performance improvement.
This package uses Golang assembly. The AVX512 version is based on Intel's "multi-buffer crypto library for IPSec" whereas the other Intel implementations are described in "Fast SHA-256 Implementations on Intel Architecture Processors" by J. Guilford et al.
## New: Support for AVX512
We have added support for AVX512 which results in an up to 8x performance improvement over AVX2 (3.0 GHz Xeon Platinum 8124M CPU):
```
$ benchcmp avx2.txt avx512.txt
benchmark AVX2 MB/s AVX512 MB/s speedup
BenchmarkHash5M 448.62 3498.20 7.80x
```
The original code was developed by Intel as part of the [multi-buffer crypto library](https://github.com/intel/intel-ipsec-mb) for IPSec or more specifically this [AVX512](https://github.com/intel/intel-ipsec-mb/blob/master/avx512/sha256_x16_avx512.asm) implementation. The key idea behind it is to process a total of 16 checksums in parallel by “transposing” 16 (independent) messages of 64 bytes between a total of 16 ZMM registers (each 64 bytes wide).
Transposing the input messages means that in order to take full advantage of the speedup you need to have a (server) workload where multiple threads are doing SHA256 calculations in parallel. Unfortunately, for this algorithm it is not possible for two message blocks processed in parallel to be dependent on one another, because then the (interim) result of the first part of the message has to be an input into the processing of the second part of the message.
Whereas the original Intel C implementation requires some sort of explicit scheduling of messages to be processed in parallel, for Golang it makes sense to take advantage of channels in order to group messages together and use channels as well for sending back the results (thereby effectively decoupling the calculations). We have implemented a fairly simple scheduling mechanism that seems to work well in practice.
Due to this different way of scheduling, we decided to use an explicit method to instantiate the AVX512 version. Essentially one or more AVX512 processing servers ([`Avx512Server`](https://github.com/minio/sha256-simd/blob/master/sha256blockAvx512_amd64.go#L294)) have to be created whereby each server can hash over 3 GB/s on a single core. A `hash.Hash` object ([`Avx512Digest`](https://github.com/minio/sha256-simd/blob/master/sha256blockAvx512_amd64.go#L45)) is then instantiated using one of these servers and used in the regular fashion:
```go
import "github.com/minio/sha256-simd"
func main() {
	server := sha256.NewAvx512Server() // processing server, can be shared by many hashers
	h512 := sha256.NewAvx512(server)   // hash.Hash backed by that server
	h512.Write(fileBlock)              // fileBlock holds the message to be hashed
	digest := h512.Sum([]byte{})
}
```
Note that, because of the scheduling overhead, for small messages (< 1 MB) you will be better off using the regular SHA256 hashing (but those are typically not performance critical anyway). Some other tips to get the best performance:
* Have many goroutines doing SHA256 calculations in parallel.
* Try to Write() messages in multiples of 64 bytes.
* Try to keep the overall length of messages at a roughly similar size, i.e. around 5 MB (this way all 16 lanes in the AVX512 computations are contributing as much as possible).
More detailed information can be found in this [blog](https://blog.minio.io/accelerate-sha256-up-to-8x-over-3-gb-s-per-core-with-avx512-a0b1d64f78f) post including scaling across cores.
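As an illustration of these tips, here is a minimal sketch that shares one `Avx512Server` between several goroutines, each hashing its own message (the goroutine count and message size are illustrative, not prescribed by the package):
```go
package main

import (
	"fmt"
	"sync"

	"github.com/minio/sha256-simd"
)

func main() {
	// One server is enough; it keeps the 16 AVX512 lanes busy as long as
	// several goroutines are hashing concurrently.
	server := sha256.NewAvx512Server()

	var wg sync.WaitGroup
	for i := 0; i < 16; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			h := sha256.NewAvx512(server)    // hash.Hash backed by the shared server
			msg := make([]byte, 5*1024*1024) // ~5 MB dummy message (see tips above)
			h.Write(msg)
			fmt.Printf("message %d: %x\n", n, h.Sum(nil))
		}(i)
	}
	wg.Wait()
}
```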
## Drop-In Replacement
The following code snippet shows how you can use `github.com/minio/sha256-simd`. This will automatically select the fastest method for the architecture on which it will be executed.
```go
import "github.com/minio/sha256-simd"
func main() {
	...
	shaWriter := sha256.New() // drop-in replacement for crypto/sha256
	io.Copy(shaWriter, file)  // file is an io.Reader with the data to hash
	...
}
```
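A self-contained variant of the same idea, assuming nothing beyond the standard library and this package, is to use the one-shot `Sum256` helper directly:
```go
package main

import (
	"fmt"

	"github.com/minio/sha256-simd"
)

func main() {
	// Sum256 uses the fastest implementation (AVX2/AVX/SSSE3/ARM SHA) selected at init time.
	digest := sha256.Sum256([]byte("hello world"))
	fmt.Printf("%x\n", digest)
}
```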
## Performance
Below is the speed in MB/s for a single core (ranked fast to slow) for blocks larger than 1 MB.
| Processor | SIMD | Speed (MB/s) |
| --------------------------------- | ------- | ------------:|
| 3.0 GHz Intel Xeon Platinum 8124M | AVX512 | 3498 |
| 1.2 GHz ARM Cortex-A53 | ARM64 | 638 |
| 3.0 GHz Intel Xeon Platinum 8124M | AVX2 | 449 |
| 3.1 GHz Intel Core i7 | AVX | 362 |
| 3.1 GHz Intel Core i7 | SSE | 299 |
## asm2plan9s
In order to be able to work more easily with AVX512/AVX2 instructions, a separate tool was developed to convert SIMD instructions into the corresponding BYTE sequence as accepted by Go assembly. See [asm2plan9s](https://github.com/minio/asm2plan9s) for more information.
## Why and benefits
One of the most performance-sensitive parts of the [Minio](https://github.com/minio/minio) object storage server is the calculation of SHA256 hash sums. For instance, during multi-part uploads each uploaded part needs to be verified for data integrity by the server.
Other applications that can benefit from enhanced SHA256 performance are deduplication in storage systems, intrusion detection, version control systems, integrity checking, etc.
## ARM SHA Extensions
The 64-bit ARMv8 core has introduced new instructions for SHA1 and SHA2 acceleration as part of the [Cryptography Extensions](http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0501f/CHDFJBCJ.html). Below you can see a small excerpt highlighting one of the rounds as is done for the SHA256 calculation process (for full code see [sha256block_arm64.s](https://github.com/minio/sha256-simd/blob/master/sha256block_arm64.s)).
```
sha256h q2, q3, v9.4s
sha256h2 q3, q4, v9.4s
sha256su0 v5.4s, v6.4s
rev32 v8.16b, v8.16b
add v9.4s, v7.4s, v18.4s
mov v4.16b, v2.16b
sha256h q2, q3, v10.4s
sha256h2 q3, q4, v10.4s
sha256su0 v6.4s, v7.4s
sha256su1 v5.4s, v7.4s, v8.4s
```
### Detailed benchmarks
Benchmarks generated on a 1.2 GHz quad-core ARM Cortex-A53 equipped [Pine64](https://www.pine64.com/).
```
minio@minio-arm:$ benchcmp golang.txt arm64.txt
benchmark golang arm64 speedup
BenchmarkHash8Bytes-4 0.68 MB/s 5.70 MB/s 8.38x
BenchmarkHash1K-4 5.65 MB/s 326.30 MB/s 57.75x
BenchmarkHash8K-4 6.00 MB/s 570.63 MB/s 95.11x
BenchmarkHash1M-4 6.05 MB/s 638.23 MB/s 105.49x
```
## License
Released under the Apache License v2.0. You can find the complete text in the file LICENSE.
## Contributing
Contributions are welcome, please send PRs for any enhancements.

View File

@ -1,32 +0,0 @@
# version format
version: "{build}"
# Operating system (build VM template)
os: Windows Server 2012 R2
# Platform.
platform: x64
clone_folder: c:\gopath\src\github.com\minio\sha256-simd
# environment variables
environment:
GOPATH: c:\gopath
GO15VENDOREXPERIMENT: 1
# scripts that run after cloning repository
install:
- set PATH=%GOPATH%\bin;c:\go\bin;%PATH%
- go version
- go env
# to run your custom scripts instead of automatic MSBuild
build_script:
- go test .
- go test -race .
# to disable automatic tests
test: off
# to disable deployment
deploy: off

View File

@ -16,78 +16,104 @@
package sha256
// True when SIMD instructions are available.
var avx512 = haveAVX512()
var avx2 = haveAVX2()
var avx = haveAVX()
var ssse3 = haveSSSE3()
var avx512 bool
var avx2 bool
var avx bool
var sse bool
var sse2 bool
var sse3 bool
var ssse3 bool
var sse41 bool
var sse42 bool
var popcnt bool
var sha bool
var armSha = haveArmSha()
// haveAVX returns true when there is AVX support
func haveAVX() bool {
_, _, c, _ := cpuid(1)
func init() {
var _xsave bool
var _osxsave bool
var _avx bool
var _avx2 bool
var _avx512f bool
var _avx512dq bool
// var _avx512pf bool
// var _avx512er bool
// var _avx512cd bool
var _avx512bw bool
var _avx512vl bool
var _sseState bool
var _avxState bool
var _opmaskState bool
var _zmmHI256State bool
var _hi16ZmmState bool
// Check XGETBV, OXSAVE and AVX bits
if c&(1<<26) != 0 && c&(1<<27) != 0 && c&(1<<28) != 0 {
// Check for OS support
eax, _ := xgetbv(0)
return (eax & 0x6) == 0x6
}
return false
}
// haveAVX2 returns true when there is AVX2 support
func haveAVX2() bool {
mfi, _, _, _ := cpuid(0)
// Check AVX2, AVX2 requires OS support, but BMI1/2 don't.
if mfi >= 7 && haveAVX() {
_, ebx, _, _ := cpuidex(7, 0)
return (ebx & 0x00000020) != 0
if mfi >= 1 {
_, _, c, d := cpuid(1)
sse = (d & (1 << 25)) != 0
sse2 = (d & (1 << 26)) != 0
sse3 = (c & (1 << 0)) != 0
ssse3 = (c & (1 << 9)) != 0
sse41 = (c & (1 << 19)) != 0
sse42 = (c & (1 << 20)) != 0
popcnt = (c & (1 << 23)) != 0
_xsave = (c & (1 << 26)) != 0
_osxsave = (c & (1 << 27)) != 0
_avx = (c & (1 << 28)) != 0
}
return false
}
// haveAVX512 returns true when there is AVX512 support
func haveAVX512() bool {
mfi, _, _, _ := cpuid(0)
// Check AVX2, AVX2 requires OS support, but BMI1/2 don't.
if mfi >= 7 {
_, _, c, _ := cpuid(1)
_, b, _, _ := cpuid(7)
// Only detect AVX-512 features if XGETBV is supported
if c&((1<<26)|(1<<27)) == (1<<26)|(1<<27) {
// Check for OS support
eax, _ := xgetbv(0)
_, ebx, _, _ := cpuidex(7, 0)
// Verify that XCR0[7:5] = 111b (OPMASK state, upper 256-bit of ZMM0-ZMM15 and
// ZMM16-ZMM31 state are enabled by OS)
/// and that XCR0[2:1] = 11b (XMM state and YMM state are enabled by OS).
if (eax>>5)&7 == 7 && (eax>>1)&3 == 3 {
if ebx&(1<<16) == 0 {
return false // no AVX512F
}
if ebx&(1<<17) == 0 {
return false // no AVX512DQ
}
if ebx&(1<<30) == 0 {
return false // no AVX512BW
}
if ebx&(1<<31) == 0 {
return false // no AVX512VL
}
return true
}
}
_avx2 = (b & (1 << 5)) != 0
_avx512f = (b & (1 << 16)) != 0
_avx512dq = (b & (1 << 17)) != 0
// _avx512pf = (b & (1 << 26)) != 0
// _avx512er = (b & (1 << 27)) != 0
// _avx512cd = (b & (1 << 28)) != 0
_avx512bw = (b & (1 << 30)) != 0
_avx512vl = (b & (1 << 31)) != 0
sha = (b & (1 << 29)) != 0
}
// Stop here if XSAVE unsupported or not enabled
if !_xsave || !_osxsave {
return
}
if _xsave && _osxsave {
a, _ := xgetbv(0)
_sseState = (a & (1 << 1)) != 0
_avxState = (a & (1 << 2)) != 0
_opmaskState = (a & (1 << 5)) != 0
_zmmHI256State = (a & (1 << 6)) != 0
_hi16ZmmState = (a & (1 << 7)) != 0
} else {
_sseState = true
}
// Very unlikely that OS would enable XSAVE and then disable SSE
if !_sseState {
sse = false
sse2 = false
sse3 = false
ssse3 = false
sse41 = false
sse42 = false
}
if _avxState {
avx = _avx
avx2 = _avx2
}
if _opmaskState && _zmmHI256State && _hi16ZmmState {
avx512 = (_avx512f &&
_avx512dq &&
_avx512bw &&
_avx512vl)
}
return false
}
// haveSSSE3 returns true when there is SSSE3 support
func haveSSSE3() bool {
_, _, c, _ := cpuid(1)
return (c & 0x00000200) != 0
}

View File

@ -24,30 +24,30 @@
// func cpuid(op uint32) (eax, ebx, ecx, edx uint32)
TEXT ·cpuid(SB), 7, $0
XORL CX, CX
MOVL op+0(FP), AX
CPUID
MOVL AX, eax+4(FP)
MOVL BX, ebx+8(FP)
MOVL CX, ecx+12(FP)
MOVL DX, edx+16(FP)
RET
XORL CX, CX
MOVL op+0(FP), AX
CPUID
MOVL AX, eax+4(FP)
MOVL BX, ebx+8(FP)
MOVL CX, ecx+12(FP)
MOVL DX, edx+16(FP)
RET
// func cpuidex(op, op2 uint32) (eax, ebx, ecx, edx uint32)
TEXT ·cpuidex(SB), 7, $0
MOVL op+0(FP), AX
MOVL op2+4(FP), CX
CPUID
MOVL AX, eax+8(FP)
MOVL BX, ebx+12(FP)
MOVL CX, ecx+16(FP)
MOVL DX, edx+20(FP)
RET
MOVL op+0(FP), AX
MOVL op2+4(FP), CX
CPUID
MOVL AX, eax+8(FP)
MOVL BX, ebx+12(FP)
MOVL CX, ecx+16(FP)
MOVL DX, edx+20(FP)
RET
// func xgetbv(index uint32) (eax, edx uint32)
TEXT ·xgetbv(SB), 7, $0
MOVL index+0(FP), CX
BYTE $0x0f; BYTE $0x01; BYTE $0xd0 // XGETBV
MOVL AX, eax+4(FP)
MOVL DX, edx+8(FP)
RET
MOVL index+0(FP), CX
BYTE $0x0f; BYTE $0x01; BYTE $0xd0 // XGETBV
MOVL AX, eax+4(FP)
MOVL DX, edx+8(FP)
RET

View File

@ -24,31 +24,30 @@
// func cpuid(op uint32) (eax, ebx, ecx, edx uint32)
TEXT ·cpuid(SB), 7, $0
XORQ CX, CX
MOVL op+0(FP), AX
CPUID
MOVL AX, eax+8(FP)
MOVL BX, ebx+12(FP)
MOVL CX, ecx+16(FP)
MOVL DX, edx+20(FP)
RET
XORQ CX, CX
MOVL op+0(FP), AX
CPUID
MOVL AX, eax+8(FP)
MOVL BX, ebx+12(FP)
MOVL CX, ecx+16(FP)
MOVL DX, edx+20(FP)
RET
// func cpuidex(op, op2 uint32) (eax, ebx, ecx, edx uint32)
TEXT ·cpuidex(SB), 7, $0
MOVL op+0(FP), AX
MOVL op2+4(FP), CX
CPUID
MOVL AX, eax+8(FP)
MOVL BX, ebx+12(FP)
MOVL CX, ecx+16(FP)
MOVL DX, edx+20(FP)
RET
MOVL op+0(FP), AX
MOVL op2+4(FP), CX
CPUID
MOVL AX, eax+8(FP)
MOVL BX, ebx+12(FP)
MOVL CX, ecx+16(FP)
MOVL DX, edx+20(FP)
RET
// func xgetbv(index uint32) (eax, edx uint32)
TEXT ·xgetbv(SB), 7, $0
MOVL index+0(FP), CX
BYTE $0x0f; BYTE $0x01; BYTE $0xd0 // XGETBV
MOVL AX, eax+8(FP)
MOVL DX, edx+12(FP)
RET
MOVL index+0(FP), CX
BYTE $0x0f; BYTE $0x01; BYTE $0xd0 // XGETBV
MOVL AX, eax+8(FP)
MOVL DX, edx+12(FP)
RET

View File

@ -13,7 +13,7 @@
// limitations under the License.
//
// +build ppc64 ppc64le mips mipsle mips64 mips64le s390x
// +build ppc64 ppc64le mips mipsle mips64 mips64le s390x wasm
package sha256

View File

@ -18,6 +18,7 @@ package sha256
import (
"crypto/sha256"
"encoding/binary"
"hash"
"runtime"
)
@ -29,7 +30,7 @@ const Size = 32
const BlockSize = 64
const (
chunk = 64
chunk = BlockSize
init0 = 0x6A09E667
init1 = 0xBB67AE85
init2 = 0x3C6EF372
@ -62,29 +63,60 @@ func (d *digest) Reset() {
d.len = 0
}
type blockfuncType int
const (
blockfuncGeneric blockfuncType = iota
blockfuncAvx512 blockfuncType = iota
blockfuncAvx2 blockfuncType = iota
blockfuncAvx blockfuncType = iota
blockfuncSsse blockfuncType = iota
blockfuncSha blockfuncType = iota
blockfuncArm blockfuncType = iota
)
var blockfunc blockfuncType
func block(dig *digest, p []byte) {
is386bit := runtime.GOARCH == "386"
isARM := runtime.GOARCH == "arm"
if is386bit || isARM {
if blockfunc == blockfuncSha {
blockShaGo(dig, p)
} else if blockfunc == blockfuncAvx2 {
blockAvx2Go(dig, p)
} else if blockfunc == blockfuncAvx {
blockAvxGo(dig, p)
} else if blockfunc == blockfuncSsse {
blockSsseGo(dig, p)
} else if blockfunc == blockfuncArm {
blockArmGo(dig, p)
} else if blockfunc == blockfuncGeneric {
blockGeneric(dig, p)
}
switch !is386bit && !isARM {
}
func init() {
is386bit := runtime.GOARCH == "386"
isARM := runtime.GOARCH == "arm"
switch {
case is386bit || isARM:
blockfunc = blockfuncGeneric
case sha && ssse3 && sse41:
blockfunc = blockfuncSha
case avx2:
blockAvx2Go(dig, p)
blockfunc = blockfuncAvx2
case avx:
blockAvxGo(dig, p)
blockfunc = blockfuncAvx
case ssse3:
blockSsseGo(dig, p)
blockfunc = blockfuncSsse
case armSha:
blockArmGo(dig, p)
blockfunc = blockfuncArm
default:
blockGeneric(dig, p)
blockfunc = blockfuncGeneric
}
}
// New returns a new hash.Hash computing the SHA256 checksum.
func New() hash.Hash {
if avx2 || avx || ssse3 || armSha {
if blockfunc != blockfuncGeneric {
d := new(digest)
d.Reset()
return d
@ -95,11 +127,12 @@ func New() hash.Hash {
}
// Sum256 - single caller sha256 helper
func Sum256(data []byte) [Size]byte {
func Sum256(data []byte) (result [Size]byte) {
var d digest
d.Reset()
d.Write(data)
return d.checkSum()
result = d.checkSum()
return
}
// Return size of checksum
@ -141,37 +174,119 @@ func (d *digest) Sum(in []byte) []byte {
}
// Intermediate checksum function
func (d *digest) checkSum() [Size]byte {
len := d.len
// Padding. Add a 1 bit and 0 bits until 56 bytes mod 64.
var tmp [64]byte
tmp[0] = 0x80
if len%64 < 56 {
d.Write(tmp[0 : 56-len%64])
} else {
d.Write(tmp[0 : 64+56-len%64])
func (d *digest) checkSum() (digest [Size]byte) {
n := d.nx
var k [64]byte
copy(k[:], d.x[:n])
k[n] = 0x80
if n >= 56 {
block(d, k[:])
// clear block buffer - go compiles this to optimal 1x xorps + 4x movups
// unfortunately expressing this more succinctly results in much worse code
k[0] = 0
k[1] = 0
k[2] = 0
k[3] = 0
k[4] = 0
k[5] = 0
k[6] = 0
k[7] = 0
k[8] = 0
k[9] = 0
k[10] = 0
k[11] = 0
k[12] = 0
k[13] = 0
k[14] = 0
k[15] = 0
k[16] = 0
k[17] = 0
k[18] = 0
k[19] = 0
k[20] = 0
k[21] = 0
k[22] = 0
k[23] = 0
k[24] = 0
k[25] = 0
k[26] = 0
k[27] = 0
k[28] = 0
k[29] = 0
k[30] = 0
k[31] = 0
k[32] = 0
k[33] = 0
k[34] = 0
k[35] = 0
k[36] = 0
k[37] = 0
k[38] = 0
k[39] = 0
k[40] = 0
k[41] = 0
k[42] = 0
k[43] = 0
k[44] = 0
k[45] = 0
k[46] = 0
k[47] = 0
k[48] = 0
k[49] = 0
k[50] = 0
k[51] = 0
k[52] = 0
k[53] = 0
k[54] = 0
k[55] = 0
k[56] = 0
k[57] = 0
k[58] = 0
k[59] = 0
k[60] = 0
k[61] = 0
k[62] = 0
k[63] = 0
}
binary.BigEndian.PutUint64(k[56:64], uint64(d.len)<<3)
block(d, k[:])
{
const i = 0
binary.BigEndian.PutUint32(digest[i*4:i*4+4], d.h[i])
}
{
const i = 1
binary.BigEndian.PutUint32(digest[i*4:i*4+4], d.h[i])
}
{
const i = 2
binary.BigEndian.PutUint32(digest[i*4:i*4+4], d.h[i])
}
{
const i = 3
binary.BigEndian.PutUint32(digest[i*4:i*4+4], d.h[i])
}
{
const i = 4
binary.BigEndian.PutUint32(digest[i*4:i*4+4], d.h[i])
}
{
const i = 5
binary.BigEndian.PutUint32(digest[i*4:i*4+4], d.h[i])
}
{
const i = 6
binary.BigEndian.PutUint32(digest[i*4:i*4+4], d.h[i])
}
{
const i = 7
binary.BigEndian.PutUint32(digest[i*4:i*4+4], d.h[i])
}
// Length in bits.
len <<= 3
for i := uint(0); i < 8; i++ {
tmp[i] = byte(len >> (56 - 8*i))
}
d.Write(tmp[0:8])
if d.nx != 0 {
panic("d.nx != 0")
}
h := d.h[:]
var digest [Size]byte
for i, s := range h {
digest[i*4] = byte(s >> 24)
digest[i*4+1] = byte(s >> 16)
digest[i*4+2] = byte(s >> 8)
digest[i*4+3] = byte(s)
}
return digest
return
}

File diff suppressed because it is too large

View File

@ -1,686 +0,0 @@
// 16x Parallel implementation of SHA256 for AVX512
//
// Minio Cloud Storage, (C) 2017 Minio, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// This code is based on the Intel Multi-Buffer Crypto for IPSec library
// and more specifically the following implementation:
// https://github.com/intel/intel-ipsec-mb/blob/master/avx512/sha256_x16_avx512.asm
//
// For Golang it has been converted into Plan 9 assembly with the help of
// github.com/minio/asm2plan9s to assemble the AVX512 instructions
//
// Copyright (c) 2017, Intel Corporation
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright
// notice, this list of conditions and the following disclaimer in the
// documentation and/or other materials provided with the distribution.
// * Neither the name of Intel Corporation nor the names of its contributors
// may be used to endorse or promote products derived from this software
// without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#define SHA256_DIGEST_ROW_SIZE 64
// arg1
#define STATE rdi
#define STATE_P9 DI
// arg2
#define INP_SIZE rsi
#define INP_SIZE_P9 SI
#define IDX rcx
#define TBL rdx
#define TBL_P9 DX
#define INPUT rax
#define INPUT_P9 AX
#define inp0 r9
#define SCRATCH_P9 R12
#define SCRATCH r12
#define maskp r13
#define MASKP_P9 R13
#define mask r14
#define MASK_P9 R14
#define A zmm0
#define B zmm1
#define C zmm2
#define D zmm3
#define E zmm4
#define F zmm5
#define G zmm6
#define H zmm7
#define T1 zmm8
#define TMP0 zmm9
#define TMP1 zmm10
#define TMP2 zmm11
#define TMP3 zmm12
#define TMP4 zmm13
#define TMP5 zmm14
#define TMP6 zmm15
#define W0 zmm16
#define W1 zmm17
#define W2 zmm18
#define W3 zmm19
#define W4 zmm20
#define W5 zmm21
#define W6 zmm22
#define W7 zmm23
#define W8 zmm24
#define W9 zmm25
#define W10 zmm26
#define W11 zmm27
#define W12 zmm28
#define W13 zmm29
#define W14 zmm30
#define W15 zmm31
#define TRANSPOSE16(_r0, _r1, _r2, _r3, _r4, _r5, _r6, _r7, _r8, _r9, _r10, _r11, _r12, _r13, _r14, _r15, _t0, _t1) \
\
\ // input r0 = {a15 a14 a13 a12 a11 a10 a9 a8 a7 a6 a5 a4 a3 a2 a1 a0}
\ // r1 = {b15 b14 b13 b12 b11 b10 b9 b8 b7 b6 b5 b4 b3 b2 b1 b0}
\ // r2 = {c15 c14 c13 c12 c11 c10 c9 c8 c7 c6 c5 c4 c3 c2 c1 c0}
\ // r3 = {d15 d14 d13 d12 d11 d10 d9 d8 d7 d6 d5 d4 d3 d2 d1 d0}
\ // r4 = {e15 e14 e13 e12 e11 e10 e9 e8 e7 e6 e5 e4 e3 e2 e1 e0}
\ // r5 = {f15 f14 f13 f12 f11 f10 f9 f8 f7 f6 f5 f4 f3 f2 f1 f0}
\ // r6 = {g15 g14 g13 g12 g11 g10 g9 g8 g7 g6 g5 g4 g3 g2 g1 g0}
\ // r7 = {h15 h14 h13 h12 h11 h10 h9 h8 h7 h6 h5 h4 h3 h2 h1 h0}
\ // r8 = {i15 i14 i13 i12 i11 i10 i9 i8 i7 i6 i5 i4 i3 i2 i1 i0}
\ // r9 = {j15 j14 j13 j12 j11 j10 j9 j8 j7 j6 j5 j4 j3 j2 j1 j0}
\ // r10 = {k15 k14 k13 k12 k11 k10 k9 k8 k7 k6 k5 k4 k3 k2 k1 k0}
\ // r11 = {l15 l14 l13 l12 l11 l10 l9 l8 l7 l6 l5 l4 l3 l2 l1 l0}
\ // r12 = {m15 m14 m13 m12 m11 m10 m9 m8 m7 m6 m5 m4 m3 m2 m1 m0}
\ // r13 = {n15 n14 n13 n12 n11 n10 n9 n8 n7 n6 n5 n4 n3 n2 n1 n0}
\ // r14 = {o15 o14 o13 o12 o11 o10 o9 o8 o7 o6 o5 o4 o3 o2 o1 o0}
\ // r15 = {p15 p14 p13 p12 p11 p10 p9 p8 p7 p6 p5 p4 p3 p2 p1 p0}
\
\ // output r0 = { p0 o0 n0 m0 l0 k0 j0 i0 h0 g0 f0 e0 d0 c0 b0 a0}
\ // r1 = { p1 o1 n1 m1 l1 k1 j1 i1 h1 g1 f1 e1 d1 c1 b1 a1}
\ // r2 = { p2 o2 n2 m2 l2 k2 j2 i2 h2 g2 f2 e2 d2 c2 b2 a2}
\ // r3 = { p3 o3 n3 m3 l3 k3 j3 i3 h3 g3 f3 e3 d3 c3 b3 a3}
\ // r4 = { p4 o4 n4 m4 l4 k4 j4 i4 h4 g4 f4 e4 d4 c4 b4 a4}
\ // r5 = { p5 o5 n5 m5 l5 k5 j5 i5 h5 g5 f5 e5 d5 c5 b5 a5}
\ // r6 = { p6 o6 n6 m6 l6 k6 j6 i6 h6 g6 f6 e6 d6 c6 b6 a6}
\ // r7 = { p7 o7 n7 m7 l7 k7 j7 i7 h7 g7 f7 e7 d7 c7 b7 a7}
\ // r8 = { p8 o8 n8 m8 l8 k8 j8 i8 h8 g8 f8 e8 d8 c8 b8 a8}
\ // r9 = { p9 o9 n9 m9 l9 k9 j9 i9 h9 g9 f9 e9 d9 c9 b9 a9}
\ // r10 = {p10 o10 n10 m10 l10 k10 j10 i10 h10 g10 f10 e10 d10 c10 b10 a10}
\ // r11 = {p11 o11 n11 m11 l11 k11 j11 i11 h11 g11 f11 e11 d11 c11 b11 a11}
\ // r12 = {p12 o12 n12 m12 l12 k12 j12 i12 h12 g12 f12 e12 d12 c12 b12 a12}
\ // r13 = {p13 o13 n13 m13 l13 k13 j13 i13 h13 g13 f13 e13 d13 c13 b13 a13}
\ // r14 = {p14 o14 n14 m14 l14 k14 j14 i14 h14 g14 f14 e14 d14 c14 b14 a14}
\ // r15 = {p15 o15 n15 m15 l15 k15 j15 i15 h15 g15 f15 e15 d15 c15 b15 a15}
\
\ // process top half
vshufps _t0, _r0, _r1, 0x44 \ // t0 = {b13 b12 a13 a12 b9 b8 a9 a8 b5 b4 a5 a4 b1 b0 a1 a0}
vshufps _r0, _r0, _r1, 0xEE \ // r0 = {b15 b14 a15 a14 b11 b10 a11 a10 b7 b6 a7 a6 b3 b2 a3 a2}
vshufps _t1, _r2, _r3, 0x44 \ // t1 = {d13 d12 c13 c12 d9 d8 c9 c8 d5 d4 c5 c4 d1 d0 c1 c0}
vshufps _r2, _r2, _r3, 0xEE \ // r2 = {d15 d14 c15 c14 d11 d10 c11 c10 d7 d6 c7 c6 d3 d2 c3 c2}
\
vshufps _r3, _t0, _t1, 0xDD \ // r3 = {d13 c13 b13 a13 d9 c9 b9 a9 d5 c5 b5 a5 d1 c1 b1 a1}
vshufps _r1, _r0, _r2, 0x88 \ // r1 = {d14 c14 b14 a14 d10 c10 b10 a10 d6 c6 b6 a6 d2 c2 b2 a2}
vshufps _r0, _r0, _r2, 0xDD \ // r0 = {d15 c15 b15 a15 d11 c11 b11 a11 d7 c7 b7 a7 d3 c3 b3 a3}
vshufps _t0, _t0, _t1, 0x88 \ // t0 = {d12 c12 b12 a12 d8 c8 b8 a8 d4 c4 b4 a4 d0 c0 b0 a0}
\
\ // use r2 in place of t0
vshufps _r2, _r4, _r5, 0x44 \ // r2 = {f13 f12 e13 e12 f9 f8 e9 e8 f5 f4 e5 e4 f1 f0 e1 e0}
vshufps _r4, _r4, _r5, 0xEE \ // r4 = {f15 f14 e15 e14 f11 f10 e11 e10 f7 f6 e7 e6 f3 f2 e3 e2}
vshufps _t1, _r6, _r7, 0x44 \ // t1 = {h13 h12 g13 g12 h9 h8 g9 g8 h5 h4 g5 g4 h1 h0 g1 g0}
vshufps _r6, _r6, _r7, 0xEE \ // r6 = {h15 h14 g15 g14 h11 h10 g11 g10 h7 h6 g7 g6 h3 h2 g3 g2}
\
vshufps _r7, _r2, _t1, 0xDD \ // r7 = {h13 g13 f13 e13 h9 g9 f9 e9 h5 g5 f5 e5 h1 g1 f1 e1}
vshufps _r5, _r4, _r6, 0x88 \ // r5 = {h14 g14 f14 e14 h10 g10 f10 e10 h6 g6 f6 e6 h2 g2 f2 e2}
vshufps _r4, _r4, _r6, 0xDD \ // r4 = {h15 g15 f15 e15 h11 g11 f11 e11 h7 g7 f7 e7 h3 g3 f3 e3}
vshufps _r2, _r2, _t1, 0x88 \ // r2 = {h12 g12 f12 e12 h8 g8 f8 e8 h4 g4 f4 e4 h0 g0 f0 e0}
\
\ // use r6 in place of t0
vshufps _r6, _r8, _r9, 0x44 \ // r6 = {j13 j12 i13 i12 j9 j8 i9 i8 j5 j4 i5 i4 j1 j0 i1 i0}
vshufps _r8, _r8, _r9, 0xEE \ // r8 = {j15 j14 i15 i14 j11 j10 i11 i10 j7 j6 i7 i6 j3 j2 i3 i2}
vshufps _t1, _r10, _r11, 0x44 \ // t1 = {l13 l12 k13 k12 l9 l8 k9 k8 l5 l4 k5 k4 l1 l0 k1 k0}
vshufps _r10, _r10, _r11, 0xEE \ // r10 = {l15 l14 k15 k14 l11 l10 k11 k10 l7 l6 k7 k6 l3 l2 k3 k2}
\
vshufps _r11, _r6, _t1, 0xDD \ // r11 = {l13 k13 j13 113 l9 k9 j9 i9 l5 k5 j5 i5 l1 k1 j1 i1}
vshufps _r9, _r8, _r10, 0x88 \ // r9 = {l14 k14 j14 114 l10 k10 j10 i10 l6 k6 j6 i6 l2 k2 j2 i2}
vshufps _r8, _r8, _r10, 0xDD \ // r8 = {l15 k15 j15 115 l11 k11 j11 i11 l7 k7 j7 i7 l3 k3 j3 i3}
vshufps _r6, _r6, _t1, 0x88 \ // r6 = {l12 k12 j12 112 l8 k8 j8 i8 l4 k4 j4 i4 l0 k0 j0 i0}
\
\ // use r10 in place of t0
vshufps _r10, _r12, _r13, 0x44 \ // r10 = {n13 n12 m13 m12 n9 n8 m9 m8 n5 n4 m5 m4 n1 n0 a1 m0}
vshufps _r12, _r12, _r13, 0xEE \ // r12 = {n15 n14 m15 m14 n11 n10 m11 m10 n7 n6 m7 m6 n3 n2 a3 m2}
vshufps _t1, _r14, _r15, 0x44 \ // t1 = {p13 p12 013 012 p9 p8 09 08 p5 p4 05 04 p1 p0 01 00}
vshufps _r14, _r14, _r15, 0xEE \ // r14 = {p15 p14 015 014 p11 p10 011 010 p7 p6 07 06 p3 p2 03 02}
\
vshufps _r15, _r10, _t1, 0xDD \ // r15 = {p13 013 n13 m13 p9 09 n9 m9 p5 05 n5 m5 p1 01 n1 m1}
vshufps _r13, _r12, _r14, 0x88 \ // r13 = {p14 014 n14 m14 p10 010 n10 m10 p6 06 n6 m6 p2 02 n2 m2}
vshufps _r12, _r12, _r14, 0xDD \ // r12 = {p15 015 n15 m15 p11 011 n11 m11 p7 07 n7 m7 p3 03 n3 m3}
vshufps _r10, _r10, _t1, 0x88 \ // r10 = {p12 012 n12 m12 p8 08 n8 m8 p4 04 n4 m4 p0 00 n0 m0}
\
\ // At this point, the registers that contain interesting data are:
\ // t0, r3, r1, r0, r2, r7, r5, r4, r6, r11, r9, r8, r10, r15, r13, r12
\ // Can use t1 and r14 as scratch registers
LEAQ PSHUFFLE_TRANSPOSE16_MASK1<>(SB), BX \
LEAQ PSHUFFLE_TRANSPOSE16_MASK2<>(SB), R8 \
\
vmovdqu32 _r14, [rbx] \
vpermi2q _r14, _t0, _r2 \ // r14 = {h8 g8 f8 e8 d8 c8 b8 a8 h0 g0 f0 e0 d0 c0 b0 a0}
vmovdqu32 _t1, [r8] \
vpermi2q _t1, _t0, _r2 \ // t1 = {h12 g12 f12 e12 d12 c12 b12 a12 h4 g4 f4 e4 d4 c4 b4 a4}
\
vmovdqu32 _r2, [rbx] \
vpermi2q _r2, _r3, _r7 \ // r2 = {h9 g9 f9 e9 d9 c9 b9 a9 h1 g1 f1 e1 d1 c1 b1 a1}
vmovdqu32 _t0, [r8] \
vpermi2q _t0, _r3, _r7 \ // t0 = {h13 g13 f13 e13 d13 c13 b13 a13 h5 g5 f5 e5 d5 c5 b5 a5}
\
vmovdqu32 _r3, [rbx] \
vpermi2q _r3, _r1, _r5 \ // r3 = {h10 g10 f10 e10 d10 c10 b10 a10 h2 g2 f2 e2 d2 c2 b2 a2}
vmovdqu32 _r7, [r8] \
vpermi2q _r7, _r1, _r5 \ // r7 = {h14 g14 f14 e14 d14 c14 b14 a14 h6 g6 f6 e6 d6 c6 b6 a6}
\
vmovdqu32 _r1, [rbx] \
vpermi2q _r1, _r0, _r4 \ // r1 = {h11 g11 f11 e11 d11 c11 b11 a11 h3 g3 f3 e3 d3 c3 b3 a3}
vmovdqu32 _r5, [r8] \
vpermi2q _r5, _r0, _r4 \ // r5 = {h15 g15 f15 e15 d15 c15 b15 a15 h7 g7 f7 e7 d7 c7 b7 a7}
\
vmovdqu32 _r0, [rbx] \
vpermi2q _r0, _r6, _r10 \ // r0 = {p8 o8 n8 m8 l8 k8 j8 i8 p0 o0 n0 m0 l0 k0 j0 i0}
vmovdqu32 _r4, [r8] \
vpermi2q _r4, _r6, _r10 \ // r4 = {p12 o12 n12 m12 l12 k12 j12 i12 p4 o4 n4 m4 l4 k4 j4 i4}
\
vmovdqu32 _r6, [rbx] \
vpermi2q _r6, _r11, _r15 \ // r6 = {p9 o9 n9 m9 l9 k9 j9 i9 p1 o1 n1 m1 l1 k1 j1 i1}
vmovdqu32 _r10, [r8] \
vpermi2q _r10, _r11, _r15 \ // r10 = {p13 o13 n13 m13 l13 k13 j13 i13 p5 o5 n5 m5 l5 k5 j5 i5}
\
vmovdqu32 _r11, [rbx] \
vpermi2q _r11, _r9, _r13 \ // r11 = {p10 o10 n10 m10 l10 k10 j10 i10 p2 o2 n2 m2 l2 k2 j2 i2}
vmovdqu32 _r15, [r8] \
vpermi2q _r15, _r9, _r13 \ // r15 = {p14 o14 n14 m14 l14 k14 j14 i14 p6 o6 n6 m6 l6 k6 j6 i6}
\
vmovdqu32 _r9, [rbx] \
vpermi2q _r9, _r8, _r12 \ // r9 = {p11 o11 n11 m11 l11 k11 j11 i11 p3 o3 n3 m3 l3 k3 j3 i3}
vmovdqu32 _r13, [r8] \
vpermi2q _r13, _r8, _r12 \ // r13 = {p15 o15 n15 m15 l15 k15 j15 i15 p7 o7 n7 m7 l7 k7 j7 i7}
\
\ // At this point r8 and r12 can be used as scratch registers
vshuff64x2 _r8, _r14, _r0, 0xEE \ // r8 = {p8 o8 n8 m8 l8 k8 j8 i8 h8 g8 f8 e8 d8 c8 b8 a8}
vshuff64x2 _r0, _r14, _r0, 0x44 \ // r0 = {p0 o0 n0 m0 l0 k0 j0 i0 h0 g0 f0 e0 d0 c0 b0 a0}
\
vshuff64x2 _r12, _t1, _r4, 0xEE \ // r12 = {p12 o12 n12 m12 l12 k12 j12 i12 h12 g12 f12 e12 d12 c12 b12 a12}
vshuff64x2 _r4, _t1, _r4, 0x44 \ // r4 = {p4 o4 n4 m4 l4 k4 j4 i4 h4 g4 f4 e4 d4 c4 b4 a4}
\
vshuff64x2 _r14, _r7, _r15, 0xEE \ // r14 = {p14 o14 n14 m14 l14 k14 j14 i14 h14 g14 f14 e14 d14 c14 b14 a14}
vshuff64x2 _t1, _r7, _r15, 0x44 \ // t1 = {p6 o6 n6 m6 l6 k6 j6 i6 h6 g6 f6 e6 d6 c6 b6 a6}
\
vshuff64x2 _r15, _r5, _r13, 0xEE \ // r15 = {p15 o15 n15 m15 l15 k15 j15 i15 h15 g15 f15 e15 d15 c15 b15 a15}
vshuff64x2 _r7, _r5, _r13, 0x44 \ // r7 = {p7 o7 n7 m7 l7 k7 j7 i7 h7 g7 f7 e7 d7 c7 b7 a7}
\
vshuff64x2 _r13, _t0, _r10, 0xEE \ // r13 = {p13 o13 n13 m13 l13 k13 j13 i13 h13 g13 f13 e13 d13 c13 b13 a13}
vshuff64x2 _r5, _t0, _r10, 0x44 \ // r5 = {p5 o5 n5 m5 l5 k5 j5 i5 h5 g5 f5 e5 d5 c5 b5 a5}
\
vshuff64x2 _r10, _r3, _r11, 0xEE \ // r10 = {p10 o10 n10 m10 l10 k10 j10 i10 h10 g10 f10 e10 d10 c10 b10 a10}
vshuff64x2 _t0, _r3, _r11, 0x44 \ // t0 = {p2 o2 n2 m2 l2 k2 j2 i2 h2 g2 f2 e2 d2 c2 b2 a2}
\
vshuff64x2 _r11, _r1, _r9, 0xEE \ // r11 = {p11 o11 n11 m11 l11 k11 j11 i11 h11 g11 f11 e11 d11 c11 b11 a11}
vshuff64x2 _r3, _r1, _r9, 0x44 \ // r3 = {p3 o3 n3 m3 l3 k3 j3 i3 h3 g3 f3 e3 d3 c3 b3 a3}
\
vshuff64x2 _r9, _r2, _r6, 0xEE \ // r9 = {p9 o9 n9 m9 l9 k9 j9 i9 h9 g9 f9 e9 d9 c9 b9 a9}
vshuff64x2 _r1, _r2, _r6, 0x44 \ // r1 = {p1 o1 n1 m1 l1 k1 j1 i1 h1 g1 f1 e1 d1 c1 b1 a1}
\
vmovdqu32 _r2, _t0 \ // r2 = {p2 o2 n2 m2 l2 k2 j2 i2 h2 g2 f2 e2 d2 c2 b2 a2}
vmovdqu32 _r6, _t1 \ // r6 = {p6 o6 n6 m6 l6 k6 j6 i6 h6 g6 f6 e6 d6 c6 b6 a6}
// CH(A, B, C) = (A&B) ^ (~A&C)
// MAJ(E, F, G) = (E&F) ^ (E&G) ^ (F&G)
// SIGMA0 = ROR_2 ^ ROR_13 ^ ROR_22
// SIGMA1 = ROR_6 ^ ROR_11 ^ ROR_25
// sigma0 = ROR_7 ^ ROR_18 ^ SHR_3
// sigma1 = ROR_17 ^ ROR_19 ^ SHR_10
// Main processing loop per round
#define PROCESS_LOOP(_WT, _ROUND, _A, _B, _C, _D, _E, _F, _G, _H) \
\ // T1 = H + SIGMA1(E) + CH(E, F, G) + Kt + Wt
\ // T2 = SIGMA0(A) + MAJ(A, B, C)
\ // H=G, G=F, F=E, E=D+T1, D=C, C=B, B=A, A=T1+T2
\
\ // H becomes T2, then add T1 for A
\ // D becomes D + T1 for E
\
vpaddd T1, _H, TMP3 \ // T1 = H + Kt
vmovdqu32 TMP0, _E \
vprord TMP1, _E, 6 \ // ROR_6(E)
vprord TMP2, _E, 11 \ // ROR_11(E)
vprord TMP3, _E, 25 \ // ROR_25(E)
vpternlogd TMP0, _F, _G, 0xCA \ // TMP0 = CH(E,F,G)
vpaddd T1, T1, _WT \ // T1 = T1 + Wt
vpternlogd TMP1, TMP2, TMP3, 0x96 \ // TMP1 = SIGMA1(E)
vpaddd T1, T1, TMP0 \ // T1 = T1 + CH(E,F,G)
vpaddd T1, T1, TMP1 \ // T1 = T1 + SIGMA1(E)
vpaddd _D, _D, T1 \ // D = D + T1
\
vprord _H, _A, 2 \ // ROR_2(A)
vprord TMP2, _A, 13 \ // ROR_13(A)
vprord TMP3, _A, 22 \ // ROR_22(A)
vmovdqu32 TMP0, _A \
vpternlogd TMP0, _B, _C, 0xE8 \ // TMP0 = MAJ(A,B,C)
vpternlogd _H, TMP2, TMP3, 0x96 \ // H(T2) = SIGMA0(A)
vpaddd _H, _H, TMP0 \ // H(T2) = SIGMA0(A) + MAJ(A,B,C)
vpaddd _H, _H, T1 \ // H(A) = H(T2) + T1
\
vmovdqu32 TMP3, [TBL + ((_ROUND+1)*64)] \ // Next Kt
#define MSG_SCHED_ROUND_16_63(_WT, _WTp1, _WTp9, _WTp14) \
vprord TMP4, _WTp14, 17 \ // ROR_17(Wt-2)
vprord TMP5, _WTp14, 19 \ // ROR_19(Wt-2)
vpsrld TMP6, _WTp14, 10 \ // SHR_10(Wt-2)
vpternlogd TMP4, TMP5, TMP6, 0x96 \ // TMP4 = sigma1(Wt-2)
\
vpaddd _WT, _WT, TMP4 \ // Wt = Wt-16 + sigma1(Wt-2)
vpaddd _WT, _WT, _WTp9 \ // Wt = Wt-16 + sigma1(Wt-2) + Wt-7
\
vprord TMP4, _WTp1, 7 \ // ROR_7(Wt-15)
vprord TMP5, _WTp1, 18 \ // ROR_18(Wt-15)
vpsrld TMP6, _WTp1, 3 \ // SHR_3(Wt-15)
vpternlogd TMP4, TMP5, TMP6, 0x96 \ // TMP4 = sigma0(Wt-15)
\
vpaddd _WT, _WT, TMP4 \ // Wt = Wt-16 + sigma1(Wt-2) +
\ // Wt-7 + sigma0(Wt-15) +
// Note this is reading in a block of data for one lane
// When all 16 are read, the data must be transposed to build msg schedule
#define MSG_SCHED_ROUND_00_15(_WT, OFFSET, LABEL) \
TESTQ $(1<<OFFSET), MASK_P9 \
JE LABEL \
MOVQ OFFSET*24(INPUT_P9), R9 \
vmovups _WT, [inp0+IDX] \
LABEL: \
#define MASKED_LOAD(_WT, OFFSET, LABEL) \
TESTQ $(1<<OFFSET), MASK_P9 \
JE LABEL \
MOVQ OFFSET*24(INPUT_P9), R9 \
vmovups _WT,[inp0+IDX] \
LABEL: \
TEXT ·sha256_x16_avx512(SB), 7, $0
MOVQ digests+0(FP), STATE_P9 //
MOVQ scratch+8(FP), SCRATCH_P9
MOVQ mask_len+32(FP), INP_SIZE_P9 // number of blocks to process
MOVQ mask+24(FP), MASKP_P9
MOVQ (MASKP_P9), MASK_P9
kmovq k1, mask
LEAQ inputs+48(FP), INPUT_P9
// Initialize digests
vmovdqu32 A, [STATE + 0*SHA256_DIGEST_ROW_SIZE]
vmovdqu32 B, [STATE + 1*SHA256_DIGEST_ROW_SIZE]
vmovdqu32 C, [STATE + 2*SHA256_DIGEST_ROW_SIZE]
vmovdqu32 D, [STATE + 3*SHA256_DIGEST_ROW_SIZE]
vmovdqu32 E, [STATE + 4*SHA256_DIGEST_ROW_SIZE]
vmovdqu32 F, [STATE + 5*SHA256_DIGEST_ROW_SIZE]
vmovdqu32 G, [STATE + 6*SHA256_DIGEST_ROW_SIZE]
vmovdqu32 H, [STATE + 7*SHA256_DIGEST_ROW_SIZE]
MOVQ table+16(FP), TBL_P9
xor IDX, IDX
// Read in first block of input data
MASKED_LOAD( W0, 0, skipInput0)
MASKED_LOAD( W1, 1, skipInput1)
MASKED_LOAD( W2, 2, skipInput2)
MASKED_LOAD( W3, 3, skipInput3)
MASKED_LOAD( W4, 4, skipInput4)
MASKED_LOAD( W5, 5, skipInput5)
MASKED_LOAD( W6, 6, skipInput6)
MASKED_LOAD( W7, 7, skipInput7)
MASKED_LOAD( W8, 8, skipInput8)
MASKED_LOAD( W9, 9, skipInput9)
MASKED_LOAD(W10, 10, skipInput10)
MASKED_LOAD(W11, 11, skipInput11)
MASKED_LOAD(W12, 12, skipInput12)
MASKED_LOAD(W13, 13, skipInput13)
MASKED_LOAD(W14, 14, skipInput14)
MASKED_LOAD(W15, 15, skipInput15)
lloop:
LEAQ PSHUFFLE_BYTE_FLIP_MASK<>(SB), TBL_P9
vmovdqu32 TMP2, [TBL]
// Get first K from table
MOVQ table+16(FP), TBL_P9
vmovdqu32 TMP3, [TBL]
// Save digests for later addition
vmovdqu32 [SCRATCH + 64*0], A
vmovdqu32 [SCRATCH + 64*1], B
vmovdqu32 [SCRATCH + 64*2], C
vmovdqu32 [SCRATCH + 64*3], D
vmovdqu32 [SCRATCH + 64*4], E
vmovdqu32 [SCRATCH + 64*5], F
vmovdqu32 [SCRATCH + 64*6], G
vmovdqu32 [SCRATCH + 64*7], H
add IDX, 64
// Transpose input data
TRANSPOSE16(W0, W1, W2, W3, W4, W5, W6, W7, W8, W9, W10, W11, W12, W13, W14, W15, TMP0, TMP1)
vpshufb W0, W0, TMP2
vpshufb W1, W1, TMP2
vpshufb W2, W2, TMP2
vpshufb W3, W3, TMP2
vpshufb W4, W4, TMP2
vpshufb W5, W5, TMP2
vpshufb W6, W6, TMP2
vpshufb W7, W7, TMP2
vpshufb W8, W8, TMP2
vpshufb W9, W9, TMP2
vpshufb W10, W10, TMP2
vpshufb W11, W11, TMP2
vpshufb W12, W12, TMP2
vpshufb W13, W13, TMP2
vpshufb W14, W14, TMP2
vpshufb W15, W15, TMP2
// MSG Schedule for W0-W15 is now complete in registers
// Process first 48 rounds
// Calculate next Wt+16 after processing is complete and Wt is unneeded
PROCESS_LOOP( W0, 0, A, B, C, D, E, F, G, H)
MSG_SCHED_ROUND_16_63( W0, W1, W9, W14)
PROCESS_LOOP( W1, 1, H, A, B, C, D, E, F, G)
MSG_SCHED_ROUND_16_63( W1, W2, W10, W15)
PROCESS_LOOP( W2, 2, G, H, A, B, C, D, E, F)
MSG_SCHED_ROUND_16_63( W2, W3, W11, W0)
PROCESS_LOOP( W3, 3, F, G, H, A, B, C, D, E)
MSG_SCHED_ROUND_16_63( W3, W4, W12, W1)
PROCESS_LOOP( W4, 4, E, F, G, H, A, B, C, D)
MSG_SCHED_ROUND_16_63( W4, W5, W13, W2)
PROCESS_LOOP( W5, 5, D, E, F, G, H, A, B, C)
MSG_SCHED_ROUND_16_63( W5, W6, W14, W3)
PROCESS_LOOP( W6, 6, C, D, E, F, G, H, A, B)
MSG_SCHED_ROUND_16_63( W6, W7, W15, W4)
PROCESS_LOOP( W7, 7, B, C, D, E, F, G, H, A)
MSG_SCHED_ROUND_16_63( W7, W8, W0, W5)
PROCESS_LOOP( W8, 8, A, B, C, D, E, F, G, H)
MSG_SCHED_ROUND_16_63( W8, W9, W1, W6)
PROCESS_LOOP( W9, 9, H, A, B, C, D, E, F, G)
MSG_SCHED_ROUND_16_63( W9, W10, W2, W7)
PROCESS_LOOP(W10, 10, G, H, A, B, C, D, E, F)
MSG_SCHED_ROUND_16_63(W10, W11, W3, W8)
PROCESS_LOOP(W11, 11, F, G, H, A, B, C, D, E)
MSG_SCHED_ROUND_16_63(W11, W12, W4, W9)
PROCESS_LOOP(W12, 12, E, F, G, H, A, B, C, D)
MSG_SCHED_ROUND_16_63(W12, W13, W5, W10)
PROCESS_LOOP(W13, 13, D, E, F, G, H, A, B, C)
MSG_SCHED_ROUND_16_63(W13, W14, W6, W11)
PROCESS_LOOP(W14, 14, C, D, E, F, G, H, A, B)
MSG_SCHED_ROUND_16_63(W14, W15, W7, W12)
PROCESS_LOOP(W15, 15, B, C, D, E, F, G, H, A)
MSG_SCHED_ROUND_16_63(W15, W0, W8, W13)
PROCESS_LOOP( W0, 16, A, B, C, D, E, F, G, H)
MSG_SCHED_ROUND_16_63( W0, W1, W9, W14)
PROCESS_LOOP( W1, 17, H, A, B, C, D, E, F, G)
MSG_SCHED_ROUND_16_63( W1, W2, W10, W15)
PROCESS_LOOP( W2, 18, G, H, A, B, C, D, E, F)
MSG_SCHED_ROUND_16_63( W2, W3, W11, W0)
PROCESS_LOOP( W3, 19, F, G, H, A, B, C, D, E)
MSG_SCHED_ROUND_16_63( W3, W4, W12, W1)
PROCESS_LOOP( W4, 20, E, F, G, H, A, B, C, D)
MSG_SCHED_ROUND_16_63( W4, W5, W13, W2)
PROCESS_LOOP( W5, 21, D, E, F, G, H, A, B, C)
MSG_SCHED_ROUND_16_63( W5, W6, W14, W3)
PROCESS_LOOP( W6, 22, C, D, E, F, G, H, A, B)
MSG_SCHED_ROUND_16_63( W6, W7, W15, W4)
PROCESS_LOOP( W7, 23, B, C, D, E, F, G, H, A)
MSG_SCHED_ROUND_16_63( W7, W8, W0, W5)
PROCESS_LOOP( W8, 24, A, B, C, D, E, F, G, H)
MSG_SCHED_ROUND_16_63( W8, W9, W1, W6)
PROCESS_LOOP( W9, 25, H, A, B, C, D, E, F, G)
MSG_SCHED_ROUND_16_63( W9, W10, W2, W7)
PROCESS_LOOP(W10, 26, G, H, A, B, C, D, E, F)
MSG_SCHED_ROUND_16_63(W10, W11, W3, W8)
PROCESS_LOOP(W11, 27, F, G, H, A, B, C, D, E)
MSG_SCHED_ROUND_16_63(W11, W12, W4, W9)
PROCESS_LOOP(W12, 28, E, F, G, H, A, B, C, D)
MSG_SCHED_ROUND_16_63(W12, W13, W5, W10)
PROCESS_LOOP(W13, 29, D, E, F, G, H, A, B, C)
MSG_SCHED_ROUND_16_63(W13, W14, W6, W11)
PROCESS_LOOP(W14, 30, C, D, E, F, G, H, A, B)
MSG_SCHED_ROUND_16_63(W14, W15, W7, W12)
PROCESS_LOOP(W15, 31, B, C, D, E, F, G, H, A)
MSG_SCHED_ROUND_16_63(W15, W0, W8, W13)
PROCESS_LOOP( W0, 32, A, B, C, D, E, F, G, H)
MSG_SCHED_ROUND_16_63( W0, W1, W9, W14)
PROCESS_LOOP( W1, 33, H, A, B, C, D, E, F, G)
MSG_SCHED_ROUND_16_63( W1, W2, W10, W15)
PROCESS_LOOP( W2, 34, G, H, A, B, C, D, E, F)
MSG_SCHED_ROUND_16_63( W2, W3, W11, W0)
PROCESS_LOOP( W3, 35, F, G, H, A, B, C, D, E)
MSG_SCHED_ROUND_16_63( W3, W4, W12, W1)
PROCESS_LOOP( W4, 36, E, F, G, H, A, B, C, D)
MSG_SCHED_ROUND_16_63( W4, W5, W13, W2)
PROCESS_LOOP( W5, 37, D, E, F, G, H, A, B, C)
MSG_SCHED_ROUND_16_63( W5, W6, W14, W3)
PROCESS_LOOP( W6, 38, C, D, E, F, G, H, A, B)
MSG_SCHED_ROUND_16_63( W6, W7, W15, W4)
PROCESS_LOOP( W7, 39, B, C, D, E, F, G, H, A)
MSG_SCHED_ROUND_16_63( W7, W8, W0, W5)
PROCESS_LOOP( W8, 40, A, B, C, D, E, F, G, H)
MSG_SCHED_ROUND_16_63( W8, W9, W1, W6)
PROCESS_LOOP( W9, 41, H, A, B, C, D, E, F, G)
MSG_SCHED_ROUND_16_63( W9, W10, W2, W7)
PROCESS_LOOP(W10, 42, G, H, A, B, C, D, E, F)
MSG_SCHED_ROUND_16_63(W10, W11, W3, W8)
PROCESS_LOOP(W11, 43, F, G, H, A, B, C, D, E)
MSG_SCHED_ROUND_16_63(W11, W12, W4, W9)
PROCESS_LOOP(W12, 44, E, F, G, H, A, B, C, D)
MSG_SCHED_ROUND_16_63(W12, W13, W5, W10)
PROCESS_LOOP(W13, 45, D, E, F, G, H, A, B, C)
MSG_SCHED_ROUND_16_63(W13, W14, W6, W11)
PROCESS_LOOP(W14, 46, C, D, E, F, G, H, A, B)
MSG_SCHED_ROUND_16_63(W14, W15, W7, W12)
PROCESS_LOOP(W15, 47, B, C, D, E, F, G, H, A)
MSG_SCHED_ROUND_16_63(W15, W0, W8, W13)
// Check if this is the last block
sub INP_SIZE, 1
JE lastLoop
// Load next mask for inputs
ADDQ $8, MASKP_P9
MOVQ (MASKP_P9), MASK_P9
// Process last 16 rounds
// Read in next block msg data for use in first 16 words of msg sched
PROCESS_LOOP( W0, 48, A, B, C, D, E, F, G, H)
MSG_SCHED_ROUND_00_15( W0, 0, skipNext0)
PROCESS_LOOP( W1, 49, H, A, B, C, D, E, F, G)
MSG_SCHED_ROUND_00_15( W1, 1, skipNext1)
PROCESS_LOOP( W2, 50, G, H, A, B, C, D, E, F)
MSG_SCHED_ROUND_00_15( W2, 2, skipNext2)
PROCESS_LOOP( W3, 51, F, G, H, A, B, C, D, E)
MSG_SCHED_ROUND_00_15( W3, 3, skipNext3)
PROCESS_LOOP( W4, 52, E, F, G, H, A, B, C, D)
MSG_SCHED_ROUND_00_15( W4, 4, skipNext4)
PROCESS_LOOP( W5, 53, D, E, F, G, H, A, B, C)
MSG_SCHED_ROUND_00_15( W5, 5, skipNext5)
PROCESS_LOOP( W6, 54, C, D, E, F, G, H, A, B)
MSG_SCHED_ROUND_00_15( W6, 6, skipNext6)
PROCESS_LOOP( W7, 55, B, C, D, E, F, G, H, A)
MSG_SCHED_ROUND_00_15( W7, 7, skipNext7)
PROCESS_LOOP( W8, 56, A, B, C, D, E, F, G, H)
MSG_SCHED_ROUND_00_15( W8, 8, skipNext8)
PROCESS_LOOP( W9, 57, H, A, B, C, D, E, F, G)
MSG_SCHED_ROUND_00_15( W9, 9, skipNext9)
PROCESS_LOOP(W10, 58, G, H, A, B, C, D, E, F)
MSG_SCHED_ROUND_00_15(W10, 10, skipNext10)
PROCESS_LOOP(W11, 59, F, G, H, A, B, C, D, E)
MSG_SCHED_ROUND_00_15(W11, 11, skipNext11)
PROCESS_LOOP(W12, 60, E, F, G, H, A, B, C, D)
MSG_SCHED_ROUND_00_15(W12, 12, skipNext12)
PROCESS_LOOP(W13, 61, D, E, F, G, H, A, B, C)
MSG_SCHED_ROUND_00_15(W13, 13, skipNext13)
PROCESS_LOOP(W14, 62, C, D, E, F, G, H, A, B)
MSG_SCHED_ROUND_00_15(W14, 14, skipNext14)
PROCESS_LOOP(W15, 63, B, C, D, E, F, G, H, A)
MSG_SCHED_ROUND_00_15(W15, 15, skipNext15)
// Add old digest
vmovdqu32 TMP2, A
vmovdqu32 A, [SCRATCH + 64*0]
vpaddd A{k1}, A, TMP2
vmovdqu32 TMP2, B
vmovdqu32 B, [SCRATCH + 64*1]
vpaddd B{k1}, B, TMP2
vmovdqu32 TMP2, C
vmovdqu32 C, [SCRATCH + 64*2]
vpaddd C{k1}, C, TMP2
vmovdqu32 TMP2, D
vmovdqu32 D, [SCRATCH + 64*3]
vpaddd D{k1}, D, TMP2
vmovdqu32 TMP2, E
vmovdqu32 E, [SCRATCH + 64*4]
vpaddd E{k1}, E, TMP2
vmovdqu32 TMP2, F
vmovdqu32 F, [SCRATCH + 64*5]
vpaddd F{k1}, F, TMP2
vmovdqu32 TMP2, G
vmovdqu32 G, [SCRATCH + 64*6]
vpaddd G{k1}, G, TMP2
vmovdqu32 TMP2, H
vmovdqu32 H, [SCRATCH + 64*7]
vpaddd H{k1}, H, TMP2
kmovq k1, mask
JMP lloop
lastLoop:
// Process last 16 rounds
PROCESS_LOOP( W0, 48, A, B, C, D, E, F, G, H)
PROCESS_LOOP( W1, 49, H, A, B, C, D, E, F, G)
PROCESS_LOOP( W2, 50, G, H, A, B, C, D, E, F)
PROCESS_LOOP( W3, 51, F, G, H, A, B, C, D, E)
PROCESS_LOOP( W4, 52, E, F, G, H, A, B, C, D)
PROCESS_LOOP( W5, 53, D, E, F, G, H, A, B, C)
PROCESS_LOOP( W6, 54, C, D, E, F, G, H, A, B)
PROCESS_LOOP( W7, 55, B, C, D, E, F, G, H, A)
PROCESS_LOOP( W8, 56, A, B, C, D, E, F, G, H)
PROCESS_LOOP( W9, 57, H, A, B, C, D, E, F, G)
PROCESS_LOOP(W10, 58, G, H, A, B, C, D, E, F)
PROCESS_LOOP(W11, 59, F, G, H, A, B, C, D, E)
PROCESS_LOOP(W12, 60, E, F, G, H, A, B, C, D)
PROCESS_LOOP(W13, 61, D, E, F, G, H, A, B, C)
PROCESS_LOOP(W14, 62, C, D, E, F, G, H, A, B)
PROCESS_LOOP(W15, 63, B, C, D, E, F, G, H, A)
// Add old digest
vmovdqu32 TMP2, A
vmovdqu32 A, [SCRATCH + 64*0]
vpaddd A{k1}, A, TMP2
vmovdqu32 TMP2, B
vmovdqu32 B, [SCRATCH + 64*1]
vpaddd B{k1}, B, TMP2
vmovdqu32 TMP2, C
vmovdqu32 C, [SCRATCH + 64*2]
vpaddd C{k1}, C, TMP2
vmovdqu32 TMP2, D
vmovdqu32 D, [SCRATCH + 64*3]
vpaddd D{k1}, D, TMP2
vmovdqu32 TMP2, E
vmovdqu32 E, [SCRATCH + 64*4]
vpaddd E{k1}, E, TMP2
vmovdqu32 TMP2, F
vmovdqu32 F, [SCRATCH + 64*5]
vpaddd F{k1}, F, TMP2
vmovdqu32 TMP2, G
vmovdqu32 G, [SCRATCH + 64*6]
vpaddd G{k1}, G, TMP2
vmovdqu32 TMP2, H
vmovdqu32 H, [SCRATCH + 64*7]
vpaddd H{k1}, H, TMP2
// Write out digest
vmovdqu32 [STATE + 0*SHA256_DIGEST_ROW_SIZE], A
vmovdqu32 [STATE + 1*SHA256_DIGEST_ROW_SIZE], B
vmovdqu32 [STATE + 2*SHA256_DIGEST_ROW_SIZE], C
vmovdqu32 [STATE + 3*SHA256_DIGEST_ROW_SIZE], D
vmovdqu32 [STATE + 4*SHA256_DIGEST_ROW_SIZE], E
vmovdqu32 [STATE + 5*SHA256_DIGEST_ROW_SIZE], F
vmovdqu32 [STATE + 6*SHA256_DIGEST_ROW_SIZE], G
vmovdqu32 [STATE + 7*SHA256_DIGEST_ROW_SIZE], H
VZEROUPPER
RET
//
// Tables
//
DATA PSHUFFLE_BYTE_FLIP_MASK<>+0x000(SB)/8, $0x0405060700010203
DATA PSHUFFLE_BYTE_FLIP_MASK<>+0x008(SB)/8, $0x0c0d0e0f08090a0b
DATA PSHUFFLE_BYTE_FLIP_MASK<>+0x010(SB)/8, $0x0405060700010203
DATA PSHUFFLE_BYTE_FLIP_MASK<>+0x018(SB)/8, $0x0c0d0e0f08090a0b
DATA PSHUFFLE_BYTE_FLIP_MASK<>+0x020(SB)/8, $0x0405060700010203
DATA PSHUFFLE_BYTE_FLIP_MASK<>+0x028(SB)/8, $0x0c0d0e0f08090a0b
DATA PSHUFFLE_BYTE_FLIP_MASK<>+0x030(SB)/8, $0x0405060700010203
DATA PSHUFFLE_BYTE_FLIP_MASK<>+0x038(SB)/8, $0x0c0d0e0f08090a0b
GLOBL PSHUFFLE_BYTE_FLIP_MASK<>(SB), 8, $64
DATA PSHUFFLE_TRANSPOSE16_MASK1<>+0x000(SB)/8, $0x0000000000000000
DATA PSHUFFLE_TRANSPOSE16_MASK1<>+0x008(SB)/8, $0x0000000000000001
DATA PSHUFFLE_TRANSPOSE16_MASK1<>+0x010(SB)/8, $0x0000000000000008
DATA PSHUFFLE_TRANSPOSE16_MASK1<>+0x018(SB)/8, $0x0000000000000009
DATA PSHUFFLE_TRANSPOSE16_MASK1<>+0x020(SB)/8, $0x0000000000000004
DATA PSHUFFLE_TRANSPOSE16_MASK1<>+0x028(SB)/8, $0x0000000000000005
DATA PSHUFFLE_TRANSPOSE16_MASK1<>+0x030(SB)/8, $0x000000000000000C
DATA PSHUFFLE_TRANSPOSE16_MASK1<>+0x038(SB)/8, $0x000000000000000D
GLOBL PSHUFFLE_TRANSPOSE16_MASK1<>(SB), 8, $64
DATA PSHUFFLE_TRANSPOSE16_MASK2<>+0x000(SB)/8, $0x0000000000000002
DATA PSHUFFLE_TRANSPOSE16_MASK2<>+0x008(SB)/8, $0x0000000000000003
DATA PSHUFFLE_TRANSPOSE16_MASK2<>+0x010(SB)/8, $0x000000000000000A
DATA PSHUFFLE_TRANSPOSE16_MASK2<>+0x018(SB)/8, $0x000000000000000B
DATA PSHUFFLE_TRANSPOSE16_MASK2<>+0x020(SB)/8, $0x0000000000000006
DATA PSHUFFLE_TRANSPOSE16_MASK2<>+0x028(SB)/8, $0x0000000000000007
DATA PSHUFFLE_TRANSPOSE16_MASK2<>+0x030(SB)/8, $0x000000000000000E
DATA PSHUFFLE_TRANSPOSE16_MASK2<>+0x038(SB)/8, $0x000000000000000F
GLOBL PSHUFFLE_TRANSPOSE16_MASK2<>(SB), 8, $64

View File

@ -28,20 +28,21 @@ import (
)
//go:noescape
func sha256_x16_avx512(digests *[512]byte, scratch *[512]byte, table *[512]uint64, mask []uint64, inputs [16][]byte)
func sha256X16Avx512(digests *[512]byte, scratch *[512]byte, table *[512]uint64, mask []uint64, inputs [16][]byte)
// Do not start at 0 but next multiple of 16 so as to be able to
// Avx512ServerUID - Do not start at 0 but next multiple of 16 so as to be able to
// differentiate with default initialization value of 0
const Avx512ServerUid = 16
const Avx512ServerUID = 16
var uidCounter uint64
// NewAvx512 - initialize sha256 Avx512 implementation.
func NewAvx512(a512srv *Avx512Server) hash.Hash {
uid := atomic.AddUint64(&uidCounter, 1)
return &Avx512Digest{uid: uid, a512srv: a512srv}
}
// Type for computing SHA256 using AVX51
// Avx512Digest - Type for computing SHA256 using Avx512
type Avx512Digest struct {
uid uint64
a512srv *Avx512Server
@ -52,12 +53,13 @@ type Avx512Digest struct {
result [Size]byte
}
// Return size of checksum
// Size - Return size of checksum
func (d *Avx512Digest) Size() int { return Size }
// Return blocksize of checksum
// BlockSize - Return blocksize of checksum
func (d Avx512Digest) BlockSize() int { return BlockSize }
// Reset - reset sha digest to its initial values
func (d *Avx512Digest) Reset() {
d.a512srv.blocksCh <- blockInput{uid: d.uid, reset: true}
d.nx = 0
@ -69,7 +71,7 @@ func (d *Avx512Digest) Reset() {
func (d *Avx512Digest) Write(p []byte) (nn int, err error) {
if d.final {
return 0, errors.New("Avx512Digest already finalized. Reset first before writing again.")
return 0, errors.New("Avx512Digest already finalized. Reset first before writing again")
}
nn = len(p)
@ -94,7 +96,7 @@ func (d *Avx512Digest) Write(p []byte) (nn int, err error) {
return
}
// Return sha256 sum in bytes
// Sum - Return sha256 sum in bytes
func (d *Avx512Digest) Sum(in []byte) (result []byte) {
if d.final {
@ -262,7 +264,7 @@ var table = [512]uint64{
func blockAvx512(digests *[512]byte, input [16][]byte, mask []uint64) [16][Size]byte {
scratch := [512]byte{}
sha256_x16_avx512(digests, &scratch, &table, mask, input)
sha256X16Avx512(digests, &scratch, &table, mask, input)
output := [16][Size]byte{}
for i := 0; i < 16; i++ {
@ -290,7 +292,7 @@ type blockInput struct {
sumCh chan [Size]byte
}
// Type to implement 16x parallel handling of SHA256 invocations
// Avx512Server - Type to implement 16x parallel handling of SHA256 invocations
type Avx512Server struct {
blocksCh chan blockInput // Input channel
totalIn int // Total number of inputs waiting to be processed
@ -298,14 +300,14 @@ type Avx512Server struct {
digests map[uint64][Size]byte // Map of uids to (interim) digest results
}
// Info for each lane
// Avx512LaneInfo - Info for each lane
type Avx512LaneInfo struct {
uid uint64 // unique identification for this SHA processing
block []byte // input block to be processed
outputCh chan [Size]byte // channel for output result
}
// Create new object for parallel processing handling
// NewAvx512Server - Create new object for parallel processing handling
func NewAvx512Server() *Avx512Server {
a512srv := &Avx512Server{}
a512srv.digests = make(map[uint64][Size]byte)
@ -316,7 +318,7 @@ func NewAvx512Server() *Avx512Server {
return a512srv
}
// Sole handler for reading from the input channel
// Process - Sole handler for reading from the input channel
func (a512srv *Avx512Server) Process() {
for {
select {
@ -363,7 +365,7 @@ func (a512srv *Avx512Server) reset(uid uint64) {
if lane.uid == uid {
if lane.block != nil {
a512srv.lanes[i] = Avx512LaneInfo{} // clear message
a512srv.totalIn -= 1
a512srv.totalIn--
}
}
}
@ -403,6 +405,7 @@ func (a512srv *Avx512Server) Write(uid uint64, p []byte) (nn int, err error) {
return len(p), nil
}
// Sum - return sha256 sum in bytes for a given sum id.
func (a512srv *Avx512Server) Sum(uid uint64, p []byte) [32]byte {
sumCh := make(chan [32]byte)
a512srv.blocksCh <- blockInput{uid: uid, msg: p, final: true, sumCh: sumCh}

File diff suppressed because one or more lines are too long

View File

@ -35,330 +35,329 @@
#include "textflag.h"
#define ROTATE_XS \
MOVOU X4, X15 \
MOVOU X5, X4 \
MOVOU X6, X5 \
MOVOU X7, X6 \
MOVOU X15, X7
MOVOU X4, X15 \
MOVOU X5, X4 \
MOVOU X6, X5 \
MOVOU X7, X6 \
MOVOU X15, X7
// compute s0 four at a time and s1 two at a time
// compute W[-16] + W[-7] 4 at a time
#define FOUR_ROUNDS_AND_SCHED(a, b, c, d, e, f, g, h) \
MOVL e, R13 \ /* y0 = e */
ROLL $18, R13 \ /* y0 = e >> (25-11) */
MOVL a, R14 \ /* y1 = a */
LONG $0x0f41e3c4; WORD $0x04c6 \ // VPALIGNR XMM0,XMM7,XMM6,0x4 /* XTMP0 = W[-7] */
ROLL $23, R14 \ /* y1 = a >> (22-13) */
XORL e, R13 \ /* y0 = e ^ (e >> (25-11)) */
MOVL f, R15 \ /* y2 = f */
ROLL $27, R13 \ /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */
XORL a, R14 \ /* y1 = a ^ (a >> (22-13) */
XORL g, R15 \ /* y2 = f^g */
LONG $0xc4fef9c5 \ // VPADDD XMM0,XMM0,XMM4 /* XTMP0 = W[-7] + W[-16] */
XORL e, R13 \ /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6) ) */
ANDL e, R15 \ /* y2 = (f^g)&e */
ROLL $21, R14 \ /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */
\ /* */
\ /* compute s0 */
\ /* */
LONG $0x0f51e3c4; WORD $0x04cc \ // VPALIGNR XMM1,XMM5,XMM4,0x4 /* XTMP1 = W[-15] */
XORL a, R14 \ /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */
ROLL $26, R13 \ /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */
XORL g, R15 \ /* y2 = CH = ((f^g)&e)^g */
ROLL $30, R14 \ /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */
ADDL R13, R15 \ /* y2 = S1 + CH */
ADDL _xfer+48(FP), R15 \ /* y2 = k + w + S1 + CH */
MOVL a, R13 \ /* y0 = a */
ADDL R15, h \ /* h = h + S1 + CH + k + w */
\ /* ROTATE_ARGS */
MOVL a, R15 \ /* y2 = a */
LONG $0xd172e9c5; BYTE $0x07 \ // VPSRLD XMM2,XMM1,0x7 /* */
ORL c, R13 \ /* y0 = a|c */
ADDL h, d \ /* d = d + h + S1 + CH + k + w */
ANDL c, R15 \ /* y2 = a&c */
LONG $0xf172e1c5; BYTE $0x19 \ // VPSLLD XMM3,XMM1,0x19 /* */
ANDL b, R13 \ /* y0 = (a|c)&b */
ADDL R14, h \ /* h = h + S1 + CH + k + w + S0 */
LONG $0xdaebe1c5 \ // VPOR XMM3,XMM3,XMM2 /* XTMP1 = W[-15] MY_ROR 7 */
ORL R15, R13 \ /* y0 = MAJ = (a|c)&b)|(a&c) */
ADDL R13, h \ /* h = h + S1 + CH + k + w + S0 + MAJ */
\ /* ROTATE_ARGS */
MOVL d, R13 \ /* y0 = e */
MOVL h, R14 \ /* y1 = a */
ROLL $18, R13 \ /* y0 = e >> (25-11) */
XORL d, R13 \ /* y0 = e ^ (e >> (25-11)) */
MOVL e, R15 \ /* y2 = f */
ROLL $23, R14 \ /* y1 = a >> (22-13) */
LONG $0xd172e9c5; BYTE $0x12 \ // VPSRLD XMM2,XMM1,0x12 /* */
XORL h, R14 \ /* y1 = a ^ (a >> (22-13) */
ROLL $27, R13 \ /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */
XORL f, R15 \ /* y2 = f^g */
LONG $0xd172b9c5; BYTE $0x03 \ // VPSRLD XMM8,XMM1,0x3 /* XTMP4 = W[-15] >> 3 */
ROLL $21, R14 \ /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */
XORL d, R13 \ /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */
ANDL d, R15 \ /* y2 = (f^g)&e */
ROLL $26, R13 \ /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */
LONG $0xf172f1c5; BYTE $0x0e \ // VPSLLD XMM1,XMM1,0xe /* */
XORL h, R14 \ /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */
XORL f, R15 \ /* y2 = CH = ((f^g)&e)^g */
LONG $0xd9efe1c5 \ // VPXOR XMM3,XMM3,XMM1 /* */
ADDL R13, R15 \ /* y2 = S1 + CH */
ADDL _xfer+52(FP), R15 \ /* y2 = k + w + S1 + CH */
ROLL $30, R14 \ /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */
LONG $0xdaefe1c5 \ // VPXOR XMM3,XMM3,XMM2 /* XTMP1 = W[-15] MY_ROR 7 ^ W[-15] MY_ROR */
MOVL h, R13 \ /* y0 = a */
ADDL R15, g \ /* h = h + S1 + CH + k + w */
MOVL h, R15 \ /* y2 = a */
LONG $0xef61c1c4; BYTE $0xc8 \ // VPXOR XMM1,XMM3,XMM8 /* XTMP1 = s0 */
ORL b, R13 \ /* y0 = a|c */
ADDL g, c \ /* d = d + h + S1 + CH + k + w */
ANDL b, R15 \ /* y2 = a&c */
\ /* */
\ /* compute low s1 */
\ /* */
LONG $0xd770f9c5; BYTE $0xfa \ // VPSHUFD XMM2,XMM7,0xfa /* XTMP2 = W[-2] {BBAA} */
ANDL a, R13 \ /* y0 = (a|c)&b */
ADDL R14, g \ /* h = h + S1 + CH + k + w + S0 */
LONG $0xc1fef9c5 \ // VPADDD XMM0,XMM0,XMM1 /* XTMP0 = W[-16] + W[-7] + s0 */
ORL R15, R13 \ /* y0 = MAJ = (a|c)&b)|(a&c) */
ADDL R13, g \ /* h = h + S1 + CH + k + w + S0 + MAJ */
\ /* ROTATE_ARGS */
MOVL c, R13 \ /* y0 = e */
MOVL g, R14 \ /* y1 = a */
ROLL $18, R13 \ /* y0 = e >> (25-11) */
XORL c, R13 \ /* y0 = e ^ (e >> (25-11)) */
ROLL $23, R14 \ /* y1 = a >> (22-13) */
MOVL d, R15 \ /* y2 = f */
XORL g, R14 \ /* y1 = a ^ (a >> (22-13) */
ROLL $27, R13 \ /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */
LONG $0xd272b9c5; BYTE $0x0a \ // VPSRLD XMM8,XMM2,0xa /* XTMP4 = W[-2] >> 10 {BBAA} */
XORL e, R15 \ /* y2 = f^g */
LONG $0xd273e1c5; BYTE $0x13 \ // VPSRLQ XMM3,XMM2,0x13 /* XTMP3 = W[-2] MY_ROR 19 {xBxA} */
XORL c, R13 \ /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */
ANDL c, R15 \ /* y2 = (f^g)&e */
LONG $0xd273e9c5; BYTE $0x11 \ // VPSRLQ XMM2,XMM2,0x11 /* XTMP2 = W[-2] MY_ROR 17 {xBxA} */
ROLL $21, R14 \ /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */
XORL g, R14 \ /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */
XORL e, R15 \ /* y2 = CH = ((f^g)&e)^g */
ROLL $26, R13 \ /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */
LONG $0xd3efe9c5 \ // VPXOR XMM2,XMM2,XMM3 /* */
ADDL R13, R15 \ /* y2 = S1 + CH */
ROLL $30, R14 \ /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */
ADDL _xfer+56(FP), R15 \ /* y2 = k + w + S1 + CH */
LONG $0xc2ef39c5 \ // VPXOR XMM8,XMM8,XMM2 /* XTMP4 = s1 {xBxA} */
MOVL g, R13 \ /* y0 = a */
ADDL R15, f \ /* h = h + S1 + CH + k + w */
MOVL g, R15 \ /* y2 = a */
LONG $0x003942c4; BYTE $0xc2 \ // VPSHUFB XMM8,XMM8,XMM10 /* XTMP4 = s1 {00BA} */
ORL a, R13 \ /* y0 = a|c */
ADDL f, b \ /* d = d + h + S1 + CH + k + w */
ANDL a, R15 \ /* y2 = a&c */
LONG $0xfe79c1c4; BYTE $0xc0 \ // VPADDD XMM0,XMM0,XMM8 /* XTMP0 = {..., ..., W[1], W[0]} */
ANDL h, R13 \ /* y0 = (a|c)&b */
ADDL R14, f \ /* h = h + S1 + CH + k + w + S0 */
\ /* */
\ /* compute high s1 */
\ /* */
LONG $0xd070f9c5; BYTE $0x50 \ // VPSHUFD XMM2,XMM0,0x50 /* XTMP2 = W[-2] {DDCC} */
ORL R15, R13 \ /* y0 = MAJ = (a|c)&b)|(a&c) */
ADDL R13, f \ /* h = h + S1 + CH + k + w + S0 + MAJ */
\ /* ROTATE_ARGS */
MOVL b, R13 \ /* y0 = e */
ROLL $18, R13 \ /* y0 = e >> (25-11) */
MOVL f, R14 \ /* y1 = a */
ROLL $23, R14 \ /* y1 = a >> (22-13) */
XORL b, R13 \ /* y0 = e ^ (e >> (25-11)) */
MOVL c, R15 \ /* y2 = f */
ROLL $27, R13 \ /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */
LONG $0xd272a1c5; BYTE $0x0a \ // VPSRLD XMM11,XMM2,0xa /* XTMP5 = W[-2] >> 10 {DDCC} */
XORL f, R14 \ /* y1 = a ^ (a >> (22-13) */
XORL d, R15 \ /* y2 = f^g */
LONG $0xd273e1c5; BYTE $0x13 \ // VPSRLQ XMM3,XMM2,0x13 /* XTMP3 = W[-2] MY_ROR 19 {xDxC} */
XORL b, R13 \ /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */
ANDL b, R15 \ /* y2 = (f^g)&e */
ROLL $21, R14 \ /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */
LONG $0xd273e9c5; BYTE $0x11 \ // VPSRLQ XMM2,XMM2,0x11 /* XTMP2 = W[-2] MY_ROR 17 {xDxC} */
XORL f, R14 \ /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */
ROLL $26, R13 \ /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */
XORL d, R15 \ /* y2 = CH = ((f^g)&e)^g */
LONG $0xd3efe9c5 \ // VPXOR XMM2,XMM2,XMM3 /* */
ROLL $30, R14 \ /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */
ADDL R13, R15 \ /* y2 = S1 + CH */
ADDL _xfer+60(FP), R15 \ /* y2 = k + w + S1 + CH */
LONG $0xdaef21c5 \ // VPXOR XMM11,XMM11,XMM2 /* XTMP5 = s1 {xDxC} */
MOVL f, R13 \ /* y0 = a */
ADDL R15, e \ /* h = h + S1 + CH + k + w */
MOVL f, R15 \ /* y2 = a */
LONG $0x002142c4; BYTE $0xdc \ // VPSHUFB XMM11,XMM11,XMM12 /* XTMP5 = s1 {DC00} */
ORL h, R13 \ /* y0 = a|c */
ADDL e, a \ /* d = d + h + S1 + CH + k + w */
ANDL h, R15 \ /* y2 = a&c */
LONG $0xe0fea1c5 \ // VPADDD XMM4,XMM11,XMM0 /* X0 = {W[3], W[2], W[1], W[0]} */
ANDL g, R13 \ /* y0 = (a|c)&b */
ADDL R14, e \ /* h = h + S1 + CH + k + w + S0 */
ORL R15, R13 \ /* y0 = MAJ = (a|c)&b)|(a&c) */
ADDL R13, e \ /* h = h + S1 + CH + k + w + S0 + MAJ */
\ /* ROTATE_ARGS */
ROTATE_XS
MOVL e, R13 \ // y0 = e
ROLL $18, R13 \ // y0 = e >> (25-11)
MOVL a, R14 \ // y1 = a
LONG $0x0f41e3c4; WORD $0x04c6 \ // VPALIGNR XMM0,XMM7,XMM6,0x4 /* XTMP0 = W[-7] */
ROLL $23, R14 \ // y1 = a >> (22-13)
XORL e, R13 \ // y0 = e ^ (e >> (25-11))
MOVL f, R15 \ // y2 = f
ROLL $27, R13 \ // y0 = (e >> (11-6)) ^ (e >> (25-6))
XORL a, R14 \ // y1 = a ^ (a >> (22-13)
XORL g, R15 \ // y2 = f^g
LONG $0xc4fef9c5 \ // VPADDD XMM0,XMM0,XMM4 /* XTMP0 = W[-7] + W[-16] */
XORL e, R13 \ // y0 = e ^ (e >> (11-6)) ^ (e >> (25-6) )
ANDL e, R15 \ // y2 = (f^g)&e
ROLL $21, R14 \ // y1 = (a >> (13-2)) ^ (a >> (22-2))
\
\ // compute s0
\
LONG $0x0f51e3c4; WORD $0x04cc \ // VPALIGNR XMM1,XMM5,XMM4,0x4 /* XTMP1 = W[-15] */
XORL a, R14 \ // y1 = a ^ (a >> (13-2)) ^ (a >> (22-2))
ROLL $26, R13 \ // y0 = S1 = (e>>6) & (e>>11) ^ (e>>25)
XORL g, R15 \ // y2 = CH = ((f^g)&e)^g
ROLL $30, R14 \ // y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22)
ADDL R13, R15 \ // y2 = S1 + CH
ADDL _xfer+48(FP), R15 \ // y2 = k + w + S1 + CH
MOVL a, R13 \ // y0 = a
ADDL R15, h \ // h = h + S1 + CH + k + w
\ // ROTATE_ARGS
MOVL a, R15 \ // y2 = a
LONG $0xd172e9c5; BYTE $0x07 \ // VPSRLD XMM2,XMM1,0x7 /* */
ORL c, R13 \ // y0 = a|c
ADDL h, d \ // d = d + h + S1 + CH + k + w
ANDL c, R15 \ // y2 = a&c
LONG $0xf172e1c5; BYTE $0x19 \ // VPSLLD XMM3,XMM1,0x19 /* */
ANDL b, R13 \ // y0 = (a|c)&b
ADDL R14, h \ // h = h + S1 + CH + k + w + S0
LONG $0xdaebe1c5 \ // VPOR XMM3,XMM3,XMM2 /* XTMP1 = W[-15] MY_ROR 7 */
ORL R15, R13 \ // y0 = MAJ = (a|c)&b)|(a&c)
ADDL R13, h \ // h = h + S1 + CH + k + w + S0 + MAJ
\ // ROTATE_ARGS
MOVL d, R13 \ // y0 = e
MOVL h, R14 \ // y1 = a
ROLL $18, R13 \ // y0 = e >> (25-11)
XORL d, R13 \ // y0 = e ^ (e >> (25-11))
MOVL e, R15 \ // y2 = f
ROLL $23, R14 \ // y1 = a >> (22-13)
LONG $0xd172e9c5; BYTE $0x12 \ // VPSRLD XMM2,XMM1,0x12 /* */
XORL h, R14 \ // y1 = a ^ (a >> (22-13)
ROLL $27, R13 \ // y0 = (e >> (11-6)) ^ (e >> (25-6))
XORL f, R15 \ // y2 = f^g
LONG $0xd172b9c5; BYTE $0x03 \ // VPSRLD XMM8,XMM1,0x3 /* XTMP4 = W[-15] >> 3 */
ROLL $21, R14 \ // y1 = (a >> (13-2)) ^ (a >> (22-2))
XORL d, R13 \ // y0 = e ^ (e >> (11-6)) ^ (e >> (25-6))
ANDL d, R15 \ // y2 = (f^g)&e
ROLL $26, R13 \ // y0 = S1 = (e>>6) & (e>>11) ^ (e>>25)
LONG $0xf172f1c5; BYTE $0x0e \ // VPSLLD XMM1,XMM1,0xe /* */
XORL h, R14 \ // y1 = a ^ (a >> (13-2)) ^ (a >> (22-2))
XORL f, R15 \ // y2 = CH = ((f^g)&e)^g
LONG $0xd9efe1c5 \ // VPXOR XMM3,XMM3,XMM1 /* */
ADDL R13, R15 \ // y2 = S1 + CH
ADDL _xfer+52(FP), R15 \ // y2 = k + w + S1 + CH
ROLL $30, R14 \ // y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22)
LONG $0xdaefe1c5 \ // VPXOR XMM3,XMM3,XMM2 /* XTMP1 = W[-15] MY_ROR 7 ^ W[-15] MY_ROR */
MOVL h, R13 \ // y0 = a
ADDL R15, g \ // h = h + S1 + CH + k + w
MOVL h, R15 \ // y2 = a
LONG $0xef61c1c4; BYTE $0xc8 \ // VPXOR XMM1,XMM3,XMM8 /* XTMP1 = s0 */
ORL b, R13 \ // y0 = a|c
ADDL g, c \ // d = d + h + S1 + CH + k + w
ANDL b, R15 \ // y2 = a&c
\
\ // compute low s1
\
LONG $0xd770f9c5; BYTE $0xfa \ // VPSHUFD XMM2,XMM7,0xfa /* XTMP2 = W[-2] {BBAA} */
ANDL a, R13 \ // y0 = (a|c)&b
ADDL R14, g \ // h = h + S1 + CH + k + w + S0
LONG $0xc1fef9c5 \ // VPADDD XMM0,XMM0,XMM1 /* XTMP0 = W[-16] + W[-7] + s0 */
ORL R15, R13 \ // y0 = MAJ = (a|c)&b)|(a&c)
ADDL R13, g \ // h = h + S1 + CH + k + w + S0 + MAJ
\ // ROTATE_ARGS
MOVL c, R13 \ // y0 = e
MOVL g, R14 \ // y1 = a
ROLL $18, R13 \ // y0 = e >> (25-11)
XORL c, R13 \ // y0 = e ^ (e >> (25-11))
ROLL $23, R14 \ // y1 = a >> (22-13)
MOVL d, R15 \ // y2 = f
XORL g, R14 \ // y1 = a ^ (a >> (22-13)
ROLL $27, R13 \ // y0 = (e >> (11-6)) ^ (e >> (25-6))
LONG $0xd272b9c5; BYTE $0x0a \ // VPSRLD XMM8,XMM2,0xa /* XTMP4 = W[-2] >> 10 {BBAA} */
XORL e, R15 \ // y2 = f^g
LONG $0xd273e1c5; BYTE $0x13 \ // VPSRLQ XMM3,XMM2,0x13 /* XTMP3 = W[-2] MY_ROR 19 {xBxA} */
XORL c, R13 \ // y0 = e ^ (e >> (11-6)) ^ (e >> (25-6))
ANDL c, R15 \ // y2 = (f^g)&e
LONG $0xd273e9c5; BYTE $0x11 \ // VPSRLQ XMM2,XMM2,0x11 /* XTMP2 = W[-2] MY_ROR 17 {xBxA} */
ROLL $21, R14 \ // y1 = (a >> (13-2)) ^ (a >> (22-2))
XORL g, R14 \ // y1 = a ^ (a >> (13-2)) ^ (a >> (22-2))
XORL e, R15 \ // y2 = CH = ((f^g)&e)^g
ROLL $26, R13 \ // y0 = S1 = (e>>6) & (e>>11) ^ (e>>25)
LONG $0xd3efe9c5 \ // VPXOR XMM2,XMM2,XMM3 /* */
ADDL R13, R15 \ // y2 = S1 + CH
ROLL $30, R14 \ // y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22)
ADDL _xfer+56(FP), R15 \ // y2 = k + w + S1 + CH
LONG $0xc2ef39c5 \ // VPXOR XMM8,XMM8,XMM2 /* XTMP4 = s1 {xBxA} */
MOVL g, R13 \ // y0 = a
ADDL R15, f \ // h = h + S1 + CH + k + w
MOVL g, R15 \ // y2 = a
LONG $0x003942c4; BYTE $0xc2 \ // VPSHUFB XMM8,XMM8,XMM10 /* XTMP4 = s1 {00BA} */
ORL a, R13 \ // y0 = a|c
ADDL f, b \ // d = d + h + S1 + CH + k + w
ANDL a, R15 \ // y2 = a&c
LONG $0xfe79c1c4; BYTE $0xc0 \ // VPADDD XMM0,XMM0,XMM8 /* XTMP0 = {..., ..., W[1], W[0]} */
ANDL h, R13 \ // y0 = (a|c)&b
ADDL R14, f \ // h = h + S1 + CH + k + w + S0
\
\ // compute high s1
\
LONG $0xd070f9c5; BYTE $0x50 \ // VPSHUFD XMM2,XMM0,0x50 /* XTMP2 = W[-2] {DDCC} */
ORL R15, R13 \ // y0 = MAJ = (a|c)&b)|(a&c)
ADDL R13, f \ // h = h + S1 + CH + k + w + S0 + MAJ
\ // ROTATE_ARGS
MOVL b, R13 \ // y0 = e
ROLL $18, R13 \ // y0 = e >> (25-11)
MOVL f, R14 \ // y1 = a
ROLL $23, R14 \ // y1 = a >> (22-13)
XORL b, R13 \ // y0 = e ^ (e >> (25-11))
MOVL c, R15 \ // y2 = f
ROLL $27, R13 \ // y0 = (e >> (11-6)) ^ (e >> (25-6))
LONG $0xd272a1c5; BYTE $0x0a \ // VPSRLD XMM11,XMM2,0xa /* XTMP5 = W[-2] >> 10 {DDCC} */
XORL f, R14 \ // y1 = a ^ (a >> (22-13)
XORL d, R15 \ // y2 = f^g
LONG $0xd273e1c5; BYTE $0x13 \ // VPSRLQ XMM3,XMM2,0x13 /* XTMP3 = W[-2] MY_ROR 19 {xDxC} */
XORL b, R13 \ // y0 = e ^ (e >> (11-6)) ^ (e >> (25-6))
ANDL b, R15 \ // y2 = (f^g)&e
ROLL $21, R14 \ // y1 = (a >> (13-2)) ^ (a >> (22-2))
LONG $0xd273e9c5; BYTE $0x11 \ // VPSRLQ XMM2,XMM2,0x11 /* XTMP2 = W[-2] MY_ROR 17 {xDxC} */
XORL f, R14 \ // y1 = a ^ (a >> (13-2)) ^ (a >> (22-2))
ROLL $26, R13 \ // y0 = S1 = (e>>6) & (e>>11) ^ (e>>25)
XORL d, R15 \ // y2 = CH = ((f^g)&e)^g
LONG $0xd3efe9c5 \ // VPXOR XMM2,XMM2,XMM3 /* */
ROLL $30, R14 \ // y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22)
ADDL R13, R15 \ // y2 = S1 + CH
ADDL _xfer+60(FP), R15 \ // y2 = k + w + S1 + CH
LONG $0xdaef21c5 \ // VPXOR XMM11,XMM11,XMM2 /* XTMP5 = s1 {xDxC} */
MOVL f, R13 \ // y0 = a
ADDL R15, e \ // h = h + S1 + CH + k + w
MOVL f, R15 \ // y2 = a
LONG $0x002142c4; BYTE $0xdc \ // VPSHUFB XMM11,XMM11,XMM12 /* XTMP5 = s1 {DC00} */
ORL h, R13 \ // y0 = a|c
ADDL e, a \ // d = d + h + S1 + CH + k + w
ANDL h, R15 \ // y2 = a&c
LONG $0xe0fea1c5 \ // VPADDD XMM4,XMM11,XMM0 /* X0 = {W[3], W[2], W[1], W[0]} */
ANDL g, R13 \ // y0 = (a|c)&b
ADDL R14, e \ // h = h + S1 + CH + k + w + S0
ORL R15, R13 \ // y0 = MAJ = (a|c)&b)|(a&c)
ADDL R13, e \ // h = h + S1 + CH + k + w + S0 + MAJ
\ // ROTATE_ARGS
ROTATE_XS
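The s0, "compute low s1" and "compute high s1" blocks interleaved above are the σ0/σ1 terms of the SHA-256 message schedule; each FOUR_ROUNDS_AND_SCHED expansion produces four new schedule words while retiring four rounds. As a minimal pure-Go sketch of the same recurrence (illustrative only, not code from this package):

```
package sketch

import "math/bits"

// expandSchedule computes W[16..63] from the 16 words of one block — the
// scalar equivalent of what the vectorised s0/s1 steps above maintain four
// words at a time.
func expandSchedule(w *[64]uint32) {
	for t := 16; t < 64; t++ {
		s0 := bits.RotateLeft32(w[t-15], -7) ^ bits.RotateLeft32(w[t-15], -18) ^ (w[t-15] >> 3)
		s1 := bits.RotateLeft32(w[t-2], -17) ^ bits.RotateLeft32(w[t-2], -19) ^ (w[t-2] >> 10)
		w[t] = w[t-16] + s0 + w[t-7] + s1
	}
}
```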
#define DO_ROUND(a, b, c, d, e, f, g, h, offset) \
MOVL e, R13 \ /* y0 = e */
ROLL $18, R13 \ /* y0 = e >> (25-11) */
MOVL a, R14 \ /* y1 = a */
XORL e, R13 \ /* y0 = e ^ (e >> (25-11)) */
ROLL $23, R14 \ /* y1 = a >> (22-13) */
MOVL f, R15 \ /* y2 = f */
XORL a, R14 \ /* y1 = a ^ (a >> (22-13) */
ROLL $27, R13 \ /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */
XORL g, R15 \ /* y2 = f^g */
XORL e, R13 \ /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */
ROLL $21, R14 \ /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */
ANDL e, R15 \ /* y2 = (f^g)&e */
XORL a, R14 \ /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */
ROLL $26, R13 \ /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */
XORL g, R15 \ /* y2 = CH = ((f^g)&e)^g */
ADDL R13, R15 \ /* y2 = S1 + CH */
ROLL $30, R14 \ /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */
ADDL _xfer+offset(FP), R15 \ /* y2 = k + w + S1 + CH */
MOVL a, R13 \ /* y0 = a */
ADDL R15, h \ /* h = h + S1 + CH + k + w */
MOVL a, R15 \ /* y2 = a */
ORL c, R13 \ /* y0 = a|c */
ADDL h, d \ /* d = d + h + S1 + CH + k + w */
ANDL c, R15 \ /* y2 = a&c */
ANDL b, R13 \ /* y0 = (a|c)&b */
ADDL R14, h \ /* h = h + S1 + CH + k + w + S0 */
ORL R15, R13 \ /* y0 = MAJ = (a|c)&b)|(a&c) */
ADDL R13, h /* h = h + S1 + CH + k + w + S0 + MAJ */
MOVL e, R13 \ // y0 = e
ROLL $18, R13 \ // y0 = e >> (25-11)
MOVL a, R14 \ // y1 = a
XORL e, R13 \ // y0 = e ^ (e >> (25-11))
ROLL $23, R14 \ // y1 = a >> (22-13)
MOVL f, R15 \ // y2 = f
XORL a, R14 \ // y1 = a ^ (a >> (22-13)
ROLL $27, R13 \ // y0 = (e >> (11-6)) ^ (e >> (25-6))
XORL g, R15 \ // y2 = f^g
XORL e, R13 \ // y0 = e ^ (e >> (11-6)) ^ (e >> (25-6))
ROLL $21, R14 \ // y1 = (a >> (13-2)) ^ (a >> (22-2))
ANDL e, R15 \ // y2 = (f^g)&e
XORL a, R14 \ // y1 = a ^ (a >> (13-2)) ^ (a >> (22-2))
ROLL $26, R13 \ // y0 = S1 = (e>>6) & (e>>11) ^ (e>>25)
XORL g, R15 \ // y2 = CH = ((f^g)&e)^g
ADDL R13, R15 \ // y2 = S1 + CH
ROLL $30, R14 \ // y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22)
ADDL _xfer+offset(FP), R15 \ // y2 = k + w + S1 + CH
MOVL a, R13 \ // y0 = a
ADDL R15, h \ // h = h + S1 + CH + k + w
MOVL a, R15 \ // y2 = a
ORL c, R13 \ // y0 = a|c
ADDL h, d \ // d = d + h + S1 + CH + k + w
ANDL c, R15 \ // y2 = a&c
ANDL b, R13 \ // y0 = (a|c)&b
ADDL R14, h \ // h = h + S1 + CH + k + w + S0
ORL R15, R13 \ // y0 = MAJ = (a|c)&b)|(a&c)
ADDL R13, h // h = h + S1 + CH + k + w + S0 + MAJ
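DO_ROUND retires a single SHA-256 round: Σ1 and Ch feed T1, Σ0 and Maj feed T2, and the comments' ((a|c)&b)|(a&c) form is simply an equivalent way of writing Maj. A minimal pure-Go sketch of the same round (names are illustrative, not from this package):

```
package sketch

import "math/bits"

// doRound computes one SHA-256 round; k is the round constant K[t] and w the
// schedule word W[t]. It returns the new a and e; the remaining state words
// shift down one position, which the macro expresses by rotating its argument
// list between invocations.
func doRound(a, b, c, d, e, f, g, h, k, w uint32) (newA, newE uint32) {
	sigma1 := bits.RotateLeft32(e, -6) ^ bits.RotateLeft32(e, -11) ^ bits.RotateLeft32(e, -25)
	ch := (e & f) ^ (^e & g)
	t1 := h + sigma1 + ch + k + w
	sigma0 := bits.RotateLeft32(a, -2) ^ bits.RotateLeft32(a, -13) ^ bits.RotateLeft32(a, -22)
	maj := ((a | c) & b) | (a & c) // same value as (a&b)^(a&c)^(b&c)
	return t1 + sigma0 + maj, d + t1
}
```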
// func blockAvx(h []uint32, message []uint8, reserved0, reserved1, reserved2, reserved3 uint64)
TEXT ·blockAvx(SB), 7, $0
MOVQ h+0(FP), SI // SI: &h
MOVQ message+24(FP), R8 // &message
MOVQ lenmessage+32(FP), R9 // length of message
CMPQ R9, $0
JEQ done_hash
ADDQ R8, R9
MOVQ R9, _inp_end+64(FP) // store end of message
// Register definition
// a --> eax
// b --> ebx
// c --> ecx
// d --> r8d
// e --> edx
// f --> r9d
// g --> r10d
// h --> r11d
//
// y0 --> r13d
// y1 --> r14d
// y2 --> r15d
MOVL (0*4)(SI), AX // a = H0
MOVL (1*4)(SI), BX // b = H1
MOVL (2*4)(SI), CX // c = H2
MOVL (3*4)(SI), R8 // d = H3
MOVL (4*4)(SI), DX // e = H4
MOVL (5*4)(SI), R9 // f = H5
MOVL (6*4)(SI), R10 // g = H6
MOVL (7*4)(SI), R11 // h = H7
MOVOU bflipMask<>(SB), X13
MOVOU shuf00BA<>(SB), X10 // shuffle xBxA -> 00BA
MOVOU shufDC00<>(SB), X12 // shuffle xDxC -> DC00
MOVQ message+24(FP), SI // SI: &message
loop0:
LEAQ constants<>(SB), BP
// byte swap first 16 dwords
MOVOU 0*16(SI), X4
LONG $0x0059c2c4; BYTE $0xe5 // VPSHUFB XMM4, XMM4, XMM13
MOVOU 1*16(SI), X5
LONG $0x0051c2c4; BYTE $0xed // VPSHUFB XMM5, XMM5, XMM13
MOVOU 2*16(SI), X6
LONG $0x0049c2c4; BYTE $0xf5 // VPSHUFB XMM6, XMM6, XMM13
MOVOU 3*16(SI), X7
LONG $0x0041c2c4; BYTE $0xfd // VPSHUFB XMM7, XMM7, XMM13
MOVQ SI, _inp+72(FP)
MOVD $0x3, DI
// schedule 48 input dwords, by doing 3 rounds of 16 each
loop1:
LONG $0x4dfe59c5; BYTE $0x00 // VPADDD XMM9, XMM4, 0[RBP] /* Add 1st constant to first part of message */
MOVOU X9, _xfer+48(FP)
FOUR_ROUNDS_AND_SCHED(AX, BX, CX, R8, DX, R9, R10, R11)
LONG $0x4dfe59c5; BYTE $0x10 // VPADDD XMM9, XMM4, 16[RBP] /* Add 2nd constant to message */
MOVOU X9, _xfer+48(FP)
FOUR_ROUNDS_AND_SCHED(DX, R9, R10, R11, AX, BX, CX, R8)
LONG $0x4dfe59c5; BYTE $0x20 // VPADDD XMM9, XMM4, 32[RBP] /* Add 3rd constant to message */
MOVOU X9, _xfer+48(FP)
FOUR_ROUNDS_AND_SCHED(AX, BX, CX, R8, DX, R9, R10, R11)
LONG $0x4dfe59c5; BYTE $0x30 // VPADDD XMM9, XMM4, 48[RBP] /* Add 4th constant to message */
MOVOU X9, _xfer+48(FP)
ADDQ $64, BP
FOUR_ROUNDS_AND_SCHED(DX, R9, R10, R11, AX, BX, CX, R8)
SUBQ $1, DI
JNE loop1
MOVD $0x2, DI
loop2:
LONG $0x4dfe59c5; BYTE $0x00 // VPADDD XMM9, XMM4, 0[RBP] /* Add 1st constant to first part of message */
MOVOU X9, _xfer+48(FP)
DO_ROUND( AX, BX, CX, R8, DX, R9, R10, R11, 48)
DO_ROUND(R11, AX, BX, CX, R8, DX, R9, R10, 52)
DO_ROUND(R10, R11, AX, BX, CX, R8, DX, R9, 56)
DO_ROUND( R9, R10, R11, AX, BX, CX, R8, DX, 60)
LONG $0x4dfe51c5; BYTE $0x10 // VPADDD XMM9, XMM5, 16[RBP] /* Add 2nd constant to message */
MOVOU X9, _xfer+48(FP)
ADDQ $32, BP
DO_ROUND( DX, R9, R10, R11, AX, BX, CX, R8, 48)
DO_ROUND( R8, DX, R9, R10, R11, AX, BX, CX, 52)
DO_ROUND( CX, R8, DX, R9, R10, R11, AX, BX, 56)
DO_ROUND( BX, CX, R8, DX, R9, R10, R11, AX, 60)
MOVOU X6, X4
MOVOU X7, X5
SUBQ $1, DI
JNE loop2
MOVQ h+0(FP), SI // SI: &h
ADDL (0*4)(SI), AX // H0 = a + H0
MOVL AX, (0*4)(SI)
ADDL (1*4)(SI), BX // H1 = b + H1
MOVL BX, (1*4)(SI)
ADDL (2*4)(SI), CX // H2 = c + H2
MOVL CX, (2*4)(SI)
ADDL (3*4)(SI), R8 // H3 = d + H3
MOVL R8, (3*4)(SI)
ADDL (4*4)(SI), DX // H4 = e + H4
MOVL DX, (4*4)(SI)
ADDL (5*4)(SI), R9 // H5 = f + H5
MOVL R9, (5*4)(SI)
ADDL (6*4)(SI), R10 // H6 = g + H6
MOVL R10, (6*4)(SI)
ADDL (7*4)(SI), R11 // H7 = h + H7
MOVL R11, (7*4)(SI)
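The ADDL/MOVL pairs above are the per-block feed-forward: each running hash word in h absorbs its working variable before the next block starts. In plain Go the same step is simply (sketch, illustrative names):

```
// feedForward folds the eight working variables back into the running hash
// state, exactly what the ADDL/MOVL sequence does with AX..R11.
func feedForward(h *[8]uint32, a, b, c, d, e, f, g, hw uint32) {
	h[0] += a
	h[1] += b
	h[2] += c
	h[3] += d
	h[4] += e
	h[5] += f
	h[6] += g
	h[7] += hw
}
```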
MOVQ _inp+72(FP), SI
ADDQ $64, SI
CMPQ _inp_end+64(FP), SI
JNE loop0
done_hash:
RET
// Constants table
DATA constants<>+0x0(SB)/8, $0x71374491428a2f98

View File

@ -0,0 +1,6 @@
//+build !noasm
package sha256
//go:noescape
func blockSha(h *[8]uint32, message []uint8)
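blockSha consumes only whole 64-byte blocks and updates h in place; any trailing partial block is left for the caller to buffer. A hypothetical caller (the helper name is illustrative, not part of the package) might look like:

```
// consumeBlocks feeds every complete 64-byte block of p to blockSha and
// returns the unconsumed tail for the caller to buffer.
func consumeBlocks(h *[8]uint32, p []byte) []byte {
	n := len(p) &^ 63 // largest multiple of the SHA-256 block size
	if n > 0 {
		blockSha(h, p[:n])
	}
	return p[n:]
}
```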

View File

@ -0,0 +1,266 @@
//+build !noasm !appengine
// SHA intrinsic version of SHA256
// Minio Cloud Storage, (C) 2018 Minio, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
#include "textflag.h"
DATA K<>+0x00(SB)/4, $0x428a2f98
DATA K<>+0x04(SB)/4, $0x71374491
DATA K<>+0x08(SB)/4, $0xb5c0fbcf
DATA K<>+0x0c(SB)/4, $0xe9b5dba5
DATA K<>+0x10(SB)/4, $0x3956c25b
DATA K<>+0x14(SB)/4, $0x59f111f1
DATA K<>+0x18(SB)/4, $0x923f82a4
DATA K<>+0x1c(SB)/4, $0xab1c5ed5
DATA K<>+0x20(SB)/4, $0xd807aa98
DATA K<>+0x24(SB)/4, $0x12835b01
DATA K<>+0x28(SB)/4, $0x243185be
DATA K<>+0x2c(SB)/4, $0x550c7dc3
DATA K<>+0x30(SB)/4, $0x72be5d74
DATA K<>+0x34(SB)/4, $0x80deb1fe
DATA K<>+0x38(SB)/4, $0x9bdc06a7
DATA K<>+0x3c(SB)/4, $0xc19bf174
DATA K<>+0x40(SB)/4, $0xe49b69c1
DATA K<>+0x44(SB)/4, $0xefbe4786
DATA K<>+0x48(SB)/4, $0x0fc19dc6
DATA K<>+0x4c(SB)/4, $0x240ca1cc
DATA K<>+0x50(SB)/4, $0x2de92c6f
DATA K<>+0x54(SB)/4, $0x4a7484aa
DATA K<>+0x58(SB)/4, $0x5cb0a9dc
DATA K<>+0x5c(SB)/4, $0x76f988da
DATA K<>+0x60(SB)/4, $0x983e5152
DATA K<>+0x64(SB)/4, $0xa831c66d
DATA K<>+0x68(SB)/4, $0xb00327c8
DATA K<>+0x6c(SB)/4, $0xbf597fc7
DATA K<>+0x70(SB)/4, $0xc6e00bf3
DATA K<>+0x74(SB)/4, $0xd5a79147
DATA K<>+0x78(SB)/4, $0x06ca6351
DATA K<>+0x7c(SB)/4, $0x14292967
DATA K<>+0x80(SB)/4, $0x27b70a85
DATA K<>+0x84(SB)/4, $0x2e1b2138
DATA K<>+0x88(SB)/4, $0x4d2c6dfc
DATA K<>+0x8c(SB)/4, $0x53380d13
DATA K<>+0x90(SB)/4, $0x650a7354
DATA K<>+0x94(SB)/4, $0x766a0abb
DATA K<>+0x98(SB)/4, $0x81c2c92e
DATA K<>+0x9c(SB)/4, $0x92722c85
DATA K<>+0xa0(SB)/4, $0xa2bfe8a1
DATA K<>+0xa4(SB)/4, $0xa81a664b
DATA K<>+0xa8(SB)/4, $0xc24b8b70
DATA K<>+0xac(SB)/4, $0xc76c51a3
DATA K<>+0xb0(SB)/4, $0xd192e819
DATA K<>+0xb4(SB)/4, $0xd6990624
DATA K<>+0xb8(SB)/4, $0xf40e3585
DATA K<>+0xbc(SB)/4, $0x106aa070
DATA K<>+0xc0(SB)/4, $0x19a4c116
DATA K<>+0xc4(SB)/4, $0x1e376c08
DATA K<>+0xc8(SB)/4, $0x2748774c
DATA K<>+0xcc(SB)/4, $0x34b0bcb5
DATA K<>+0xd0(SB)/4, $0x391c0cb3
DATA K<>+0xd4(SB)/4, $0x4ed8aa4a
DATA K<>+0xd8(SB)/4, $0x5b9cca4f
DATA K<>+0xdc(SB)/4, $0x682e6ff3
DATA K<>+0xe0(SB)/4, $0x748f82ee
DATA K<>+0xe4(SB)/4, $0x78a5636f
DATA K<>+0xe8(SB)/4, $0x84c87814
DATA K<>+0xec(SB)/4, $0x8cc70208
DATA K<>+0xf0(SB)/4, $0x90befffa
DATA K<>+0xf4(SB)/4, $0xa4506ceb
DATA K<>+0xf8(SB)/4, $0xbef9a3f7
DATA K<>+0xfc(SB)/4, $0xc67178f2
GLOBL K<>(SB), RODATA|NOPTR, $256
DATA SHUF_MASK<>+0x00(SB)/8, $0x0405060700010203
DATA SHUF_MASK<>+0x08(SB)/8, $0x0c0d0e0f08090a0b
GLOBL SHUF_MASK<>(SB), RODATA|NOPTR, $16
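K<> holds the 64 SHA-256 round constants (the fractional parts of the cube roots of the first 64 primes), and SHUF_MASK is the PSHUFB pattern that byte-swaps each 32-bit word so the message is read big-endian. The same constants in Go form, abbreviated for illustration:

```
// _K mirrors the K<> table above; only the first four of the 64 round
// constants are spelled out here.
var _K = [64]uint32{
	0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5,
	// ... remaining 60 constants as in the DATA directives above
}
```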
// Register Usage
// BX base address of constant table (constant)
// DX hash_state (constant)
// SI hash_data.data
// DI hash_data.data + hash_data.length - 64 (constant)
// X0 scratch
// X1 scratch
// X2 working hash state // ABEF
// X3 working hash state // CDGH
// X4 first 16 bytes of block
// X5 second 16 bytes of block
// X6 third 16 bytes of block
// X7 fourth 16 bytes of block
// X12 saved hash state // ABEF
// X13 saved hash state // CDGH
// X15 data shuffle mask (constant)
TEXT ·blockSha(SB), NOSPLIT, $0-32
MOVQ h+0(FP), DX
MOVQ message_base+8(FP), SI
MOVQ message_len+16(FP), DI
LEAQ -64(SI)(DI*1), DI
MOVOU (DX), X2
MOVOU 16(DX), X1
MOVO X2, X3
PUNPCKLLQ X1, X2
PUNPCKHLQ X1, X3
PSHUFD $0x27, X2, X2
PSHUFD $0x27, X3, X3
MOVO SHUF_MASK<>(SB), X15
LEAQ K<>(SB), BX
JMP TEST
LOOP:
MOVO X2, X12
MOVO X3, X13
// load block and shuffle
MOVOU (SI), X4
MOVOU 16(SI), X5
MOVOU 32(SI), X6
MOVOU 48(SI), X7
PSHUFB X15, X4
PSHUFB X15, X5
PSHUFB X15, X6
PSHUFB X15, X7
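The four PSHUFB operations convert each loaded 32-bit word to big-endian, which is how SHA-256 interprets the message. A scalar equivalent (sketch, not code from this package):

```
package sketch

import "encoding/binary"

// loadBlock reads one 64-byte block as sixteen big-endian words, matching
// the MOVOU + PSHUFB(SHUF_MASK) load above.
func loadBlock(p []byte) (w [16]uint32) {
	for i := range w {
		w[i] = binary.BigEndian.Uint32(p[4*i:])
	}
	return
}
```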
#define ROUND456 \
PADDL X5, X0 \
LONG $0xdacb380f \ // SHA256RNDS2 XMM3, XMM2
MOVO X5, X1 \
LONG $0x0f3a0f66; WORD $0x04cc \ // PALIGNR XMM1, XMM4, 4
PADDL X1, X6 \
LONG $0xf5cd380f \ // SHA256MSG2 XMM6, XMM5
PSHUFD $0x4e, X0, X0 \
LONG $0xd3cb380f \ // SHA256RNDS2 XMM2, XMM3
LONG $0xe5cc380f // SHA256MSG1 XMM4, XMM5
#define ROUND567 \
PADDL X6, X0 \
LONG $0xdacb380f \ // SHA256RNDS2 XMM3, XMM2
MOVO X6, X1 \
LONG $0x0f3a0f66; WORD $0x04cd \ // PALIGNR XMM1, XMM5, 4
PADDL X1, X7 \
LONG $0xfecd380f \ // SHA256MSG2 XMM7, XMM6
PSHUFD $0x4e, X0, X0 \
LONG $0xd3cb380f \ // SHA256RNDS2 XMM2, XMM3
LONG $0xeecc380f // SHA256MSG1 XMM5, XMM6
#define ROUND674 \
PADDL X7, X0 \
LONG $0xdacb380f \ // SHA256RNDS2 XMM3, XMM2
MOVO X7, X1 \
LONG $0x0f3a0f66; WORD $0x04ce \ // PALIGNR XMM1, XMM6, 4
PADDL X1, X4 \
LONG $0xe7cd380f \ // SHA256MSG2 XMM4, XMM7
PSHUFD $0x4e, X0, X0 \
LONG $0xd3cb380f \ // SHA256RNDS2 XMM2, XMM3
LONG $0xf7cc380f // SHA256MSG1 XMM6, XMM7
#define ROUND745 \
PADDL X4, X0 \
LONG $0xdacb380f \ // SHA256RNDS2 XMM3, XMM2
MOVO X4, X1 \
LONG $0x0f3a0f66; WORD $0x04cf \ // PALIGNR XMM1, XMM7, 4
PADDL X1, X5 \
LONG $0xeccd380f \ // SHA256MSG2 XMM5, XMM4
PSHUFD $0x4e, X0, X0 \
LONG $0xd3cb380f \ // SHA256RNDS2 XMM2, XMM3
LONG $0xfccc380f // SHA256MSG1 XMM7, XMM4
// rounds 0-3
MOVO (BX), X0
PADDL X4, X0
LONG $0xdacb380f // SHA256RNDS2 XMM3, XMM2
PSHUFD $0x4e, X0, X0
LONG $0xd3cb380f // SHA256RNDS2 XMM2, XMM3
// rounds 4-7
MOVO 1*16(BX), X0
PADDL X5, X0
LONG $0xdacb380f // SHA256RNDS2 XMM3, XMM2
PSHUFD $0x4e, X0, X0
LONG $0xd3cb380f // SHA256RNDS2 XMM2, XMM3
LONG $0xe5cc380f // SHA256MSG1 XMM4, XMM5
// rounds 8-11
MOVO 2*16(BX), X0
PADDL X6, X0
LONG $0xdacb380f // SHA256RNDS2 XMM3, XMM2
PSHUFD $0x4e, X0, X0
LONG $0xd3cb380f // SHA256RNDS2 XMM2, XMM3
LONG $0xeecc380f // SHA256MSG1 XMM5, XMM6
MOVO 3*16(BX), X0; ROUND674 // rounds 12-15
MOVO 4*16(BX), X0; ROUND745 // rounds 16-19
MOVO 5*16(BX), X0; ROUND456 // rounds 20-23
MOVO 6*16(BX), X0; ROUND567 // rounds 24-27
MOVO 7*16(BX), X0; ROUND674 // rounds 28-31
MOVO 8*16(BX), X0; ROUND745 // rounds 32-35
MOVO 9*16(BX), X0; ROUND456 // rounds 36-39
MOVO 10*16(BX), X0; ROUND567 // rounds 40-43
MOVO 11*16(BX), X0; ROUND674 // rounds 44-47
MOVO 12*16(BX), X0; ROUND745 // rounds 48-51
// rounds 52-55
MOVO 13*16(BX), X0
PADDL X5, X0
LONG $0xdacb380f // SHA256RNDS2 XMM3, XMM2
MOVO X5, X1
LONG $0x0f3a0f66; WORD $0x04cc // PALIGNR XMM1, XMM4, 4
PADDL X1, X6
LONG $0xf5cd380f // SHA256MSG2 XMM6, XMM5
PSHUFD $0x4e, X0, X0
LONG $0xd3cb380f // SHA256RNDS2 XMM2, XMM3
// rounds 56-59
MOVO 14*16(BX), X0
PADDL X6, X0
LONG $0xdacb380f // SHA256RNDS2 XMM3, XMM2
MOVO X6, X1
LONG $0x0f3a0f66; WORD $0x04cd // PALIGNR XMM1, XMM5, 4
PADDL X1, X7
LONG $0xfecd380f // SHA256MSG2 XMM7, XMM6
PSHUFD $0x4e, X0, X0
LONG $0xd3cb380f // SHA256RNDS2 XMM2, XMM3
// rounds 60-63
MOVO 15*16(BX), X0
PADDL X7, X0
LONG $0xdacb380f // SHA256RNDS2 XMM3, XMM2
PSHUFD $0x4e, X0, X0
LONG $0xd3cb380f // SHA256RNDS2 XMM2, XMM3
PADDL X12, X2
PADDL X13, X3
ADDQ $64, SI
TEST:
CMPQ SI, DI
JBE LOOP
PSHUFD $0x4e, X3, X0
LONG $0x0e3a0f66; WORD $0xf0c2 // PBLENDW XMM0, XMM2, 0xf0
PSHUFD $0x4e, X2, X1
LONG $0x0e3a0f66; WORD $0x0fcb // PBLENDW XMM1, XMM3, 0x0f
PSHUFD $0x1b, X0, X0
PSHUFD $0x1b, X1, X1
MOVOU X0, (DX)
MOVOU X1, 16(DX)
RET

View File

@ -35,351 +35,350 @@
#include "textflag.h"
#define ROTATE_XS \
MOVOU X4, X15 \
MOVOU X5, X4 \
MOVOU X6, X5 \
MOVOU X7, X6 \
MOVOU X15, X7
// compute s0 four at a time and s1 two at a time
// compute W[-16] + W[-7] 4 at a time
#define FOUR_ROUNDS_AND_SCHED(a, b, c, d, e, f, g, h) \
MOVL e, R13 \ /* y0 = e */
ROLL $18, R13 \ /* y0 = e >> (25-11) */
MOVL a, R14 \ /* y1 = a */
MOVOU X7, X0 \
LONG $0x0f3a0f66; WORD $0x04c6 \ // PALIGNR XMM0,XMM6,0x4 /* XTMP0 = W[-7] */
ROLL $23, R14 \ /* y1 = a >> (22-13) */
XORL e, R13 \ /* y0 = e ^ (e >> (25-11)) */
MOVL f, R15 \ /* y2 = f */
ROLL $27, R13 \ /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */
XORL a, R14 \ /* y1 = a ^ (a >> (22-13) */
XORL g, R15 \ /* y2 = f^g */
LONG $0xc4fe0f66 \ // PADDD XMM0,XMM4 /* XTMP0 = W[-7] + W[-16] */
XORL e, R13 \ /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6) ) */
ANDL e, R15 \ /* y2 = (f^g)&e */
ROLL $21, R14 \ /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */
\ /* */
\ /* compute s0 */
\ /* */
MOVOU X5, X1 \
LONG $0x0f3a0f66; WORD $0x04cc \ // PALIGNR XMM1,XMM4,0x4 /* XTMP1 = W[-15] */
XORL a, R14 \ /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */
ROLL $26, R13 \ /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */
XORL g, R15 \ /* y2 = CH = ((f^g)&e)^g */
ROLL $30, R14 \ /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */
ADDL R13, R15 \ /* y2 = S1 + CH */
ADDL _xfer+48(FP), R15 \ /* y2 = k + w + S1 + CH */
MOVL a, R13 \ /* y0 = a */
ADDL R15, h \ /* h = h + S1 + CH + k + w */
\ /* ROTATE_ARGS */
MOVL a, R15 \ /* y2 = a */
MOVOU X1, X2 \
LONG $0xd2720f66; BYTE $0x07 \ // PSRLD XMM2,0x7 /* */
ORL c, R13 \ /* y0 = a|c */
ADDL h, d \ /* d = d + h + S1 + CH + k + w */
ANDL c, R15 \ /* y2 = a&c */
MOVOU X1, X3 \
LONG $0xf3720f66; BYTE $0x19 \ // PSLLD XMM3,0x19 /* */
ANDL b, R13 \ /* y0 = (a|c)&b */
ADDL R14, h \ /* h = h + S1 + CH + k + w + S0 */
LONG $0xdaeb0f66 \ // POR XMM3,XMM2 /* XTMP1 = W[-15] MY_ROR 7 */
ORL R15, R13 \ /* y0 = MAJ = (a|c)&b)|(a&c) */
ADDL R13, h \ /* h = h + S1 + CH + k + w + S0 + MAJ */
\ /* ROTATE_ARGS */
MOVL d, R13 \ /* y0 = e */
MOVL h, R14 \ /* y1 = a */
ROLL $18, R13 \ /* y0 = e >> (25-11) */
XORL d, R13 \ /* y0 = e ^ (e >> (25-11)) */
MOVL e, R15 \ /* y2 = f */
ROLL $23, R14 \ /* y1 = a >> (22-13) */
MOVOU X1, X2 \
LONG $0xd2720f66; BYTE $0x12 \ // PSRLD XMM2,0x12 /* */
XORL h, R14 \ /* y1 = a ^ (a >> (22-13) */
ROLL $27, R13 \ /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */
XORL f, R15 \ /* y2 = f^g */
MOVOU X1, X8 \
LONG $0x720f4166; WORD $0x03d0 \ // PSRLD XMM8,0x3 /* XTMP4 = W[-15] >> 3 */
ROLL $21, R14 \ /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */
XORL d, R13 \ /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */
ANDL d, R15 \ /* y2 = (f^g)&e */
ROLL $26, R13 \ /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */
LONG $0xf1720f66; BYTE $0x0e \ // PSLLD XMM1,0xe /* */
XORL h, R14 \ /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */
XORL f, R15 \ /* y2 = CH = ((f^g)&e)^g */
LONG $0xd9ef0f66 \ // PXOR XMM3,XMM1 /* */
ADDL R13, R15 \ /* y2 = S1 + CH */
ADDL _xfer+52(FP), R15 \ /* y2 = k + w + S1 + CH */
ROLL $30, R14 \ /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */
LONG $0xdaef0f66 \ // PXOR XMM3,XMM2 /* XTMP1 = W[-15] MY_ROR 7 ^ W[-15] MY_ROR */
MOVL h, R13 \ /* y0 = a */
ADDL R15, g \ /* h = h + S1 + CH + k + w */
MOVL h, R15 \ /* y2 = a */
MOVOU X3, X1 \
LONG $0xef0f4166; BYTE $0xc8 \ // PXOR XMM1,XMM8 /* XTMP1 = s0 */
ORL b, R13 \ /* y0 = a|c */
ADDL g, c \ /* d = d + h + S1 + CH + k + w */
ANDL b, R15 \ /* y2 = a&c */
\ /* */
\ /* compute low s1 */
\ /* */
LONG $0xd7700f66; BYTE $0xfa \ // PSHUFD XMM2,XMM7,0xfa /* XTMP2 = W[-2] {BBAA} */
ANDL a, R13 \ /* y0 = (a|c)&b */
ADDL R14, g \ /* h = h + S1 + CH + k + w + S0 */
LONG $0xc1fe0f66 \ // PADDD XMM0,XMM1 /* XTMP0 = W[-16] + W[-7] + s0 */
ORL R15, R13 \ /* y0 = MAJ = (a|c)&b)|(a&c) */
ADDL R13, g \ /* h = h + S1 + CH + k + w + S0 + MAJ */
\ /* ROTATE_ARGS */
MOVL c, R13 \ /* y0 = e */
MOVL g, R14 \ /* y1 = a */
ROLL $18, R13 \ /* y0 = e >> (25-11) */
XORL c, R13 \ /* y0 = e ^ (e >> (25-11)) */
ROLL $23, R14 \ /* y1 = a >> (22-13) */
MOVL d, R15 \ /* y2 = f */
XORL g, R14 \ /* y1 = a ^ (a >> (22-13) */
ROLL $27, R13 \ /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */
MOVOU X2, X8 \
LONG $0x720f4166; WORD $0x0ad0 \ // PSRLD XMM8,0xa /* XTMP4 = W[-2] >> 10 {BBAA} */
XORL e, R15 \ /* y2 = f^g */
MOVOU X2, X3 \
LONG $0xd3730f66; BYTE $0x13 \ // PSRLQ XMM3,0x13 /* XTMP3 = W[-2] MY_ROR 19 {xBxA} */
XORL c, R13 \ /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */
ANDL c, R15 \ /* y2 = (f^g)&e */
LONG $0xd2730f66; BYTE $0x11 \ // PSRLQ XMM2,0x11 /* XTMP2 = W[-2] MY_ROR 17 {xBxA} */
ROLL $21, R14 \ /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */
XORL g, R14 \ /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */
XORL e, R15 \ /* y2 = CH = ((f^g)&e)^g */
ROLL $26, R13 \ /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */
LONG $0xd3ef0f66 \ // PXOR XMM2,XMM3 /* */
ADDL R13, R15 \ /* y2 = S1 + CH */
ROLL $30, R14 \ /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */
ADDL _xfer+56(FP), R15 \ /* y2 = k + w + S1 + CH */
LONG $0xef0f4466; BYTE $0xc2 \ // PXOR XMM8,XMM2 /* XTMP4 = s1 {xBxA} */
MOVL g, R13 \ /* y0 = a */
ADDL R15, f \ /* h = h + S1 + CH + k + w */
MOVL g, R15 \ /* y2 = a */
LONG $0x380f4566; WORD $0xc200 \ // PSHUFB XMM8,XMM10 /* XTMP4 = s1 {00BA} */
ORL a, R13 \ /* y0 = a|c */
ADDL f, b \ /* d = d + h + S1 + CH + k + w */
ANDL a, R15 \ /* y2 = a&c */
LONG $0xfe0f4166; BYTE $0xc0 \ // PADDD XMM0,XMM8 /* XTMP0 = {..., ..., W[1], W[0]} */
ANDL h, R13 \ /* y0 = (a|c)&b */
ADDL R14, f \ /* h = h + S1 + CH + k + w + S0 */
\ /* */
\ /* compute high s1 */
\ /* */
LONG $0xd0700f66; BYTE $0x50 \ // PSHUFD XMM2,XMM0,0x50 /* XTMP2 = W[-2] {DDCC} */
ORL R15, R13 \ /* y0 = MAJ = (a|c)&b)|(a&c) */
ADDL R13, f \ /* h = h + S1 + CH + k + w + S0 + MAJ */
\ /* ROTATE_ARGS */
MOVL b, R13 \ /* y0 = e */
ROLL $18, R13 \ /* y0 = e >> (25-11) */
MOVL f, R14 \ /* y1 = a */
ROLL $23, R14 \ /* y1 = a >> (22-13) */
XORL b, R13 \ /* y0 = e ^ (e >> (25-11)) */
MOVL c, R15 \ /* y2 = f */
ROLL $27, R13 \ /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */
MOVOU X2, X11 \
LONG $0x720f4166; WORD $0x0ad3 \ // PSRLD XMM11,0xa /* XTMP5 = W[-2] >> 10 {DDCC} */
XORL f, R14 \ /* y1 = a ^ (a >> (22-13) */
XORL d, R15 \ /* y2 = f^g */
MOVOU X2, X3 \
LONG $0xd3730f66; BYTE $0x13 \ // PSRLQ XMM3,0x13 /* XTMP3 = W[-2] MY_ROR 19 {xDxC} */
XORL b, R13 \ /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */
ANDL b, R15 \ /* y2 = (f^g)&e */
ROLL $21, R14 \ /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */
LONG $0xd2730f66; BYTE $0x11 \ // PSRLQ XMM2,0x11 /* XTMP2 = W[-2] MY_ROR 17 {xDxC} */
XORL f, R14 \ /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */
ROLL $26, R13 \ /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */
XORL d, R15 \ /* y2 = CH = ((f^g)&e)^g */
LONG $0xd3ef0f66 \ // PXOR XMM2,XMM3 /* */
ROLL $30, R14 \ /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */
ADDL R13, R15 \ /* y2 = S1 + CH */
ADDL _xfer+60(FP), R15 \ /* y2 = k + w + S1 + CH */
LONG $0xef0f4466; BYTE $0xda \ // PXOR XMM11,XMM2 /* XTMP5 = s1 {xDxC} */
MOVL f, R13 \ /* y0 = a */
ADDL R15, e \ /* h = h + S1 + CH + k + w */
MOVL f, R15 \ /* y2 = a */
LONG $0x380f4566; WORD $0xdc00 \ // PSHUFB XMM11,XMM12 /* XTMP5 = s1 {DC00} */
ORL h, R13 \ /* y0 = a|c */
ADDL e, a \ /* d = d + h + S1 + CH + k + w */
ANDL h, R15 \ /* y2 = a&c */
MOVOU X11, X4 \
LONG $0xe0fe0f66 \ // PADDD XMM4,XMM0 /* X0 = {W[3], W[2], W[1], W[0]} */
ANDL g, R13 \ /* y0 = (a|c)&b */
ADDL R14, e \ /* h = h + S1 + CH + k + w + S0 */
ORL R15, R13 \ /* y0 = MAJ = (a|c)&b)|(a&c) */
ADDL R13, e \ /* h = h + S1 + CH + k + w + S0 + MAJ */
\ /* ROTATE_ARGS */
ROTATE_XS
MOVL e, R13 \ // y0 = e
ROLL $18, R13 \ // y0 = e >> (25-11)
MOVL a, R14 \ // y1 = a
MOVOU X7, X0 \
LONG $0x0f3a0f66; WORD $0x04c6 \ // PALIGNR XMM0,XMM6,0x4 /* XTMP0 = W[-7] */
ROLL $23, R14 \ // y1 = a >> (22-13)
XORL e, R13 \ // y0 = e ^ (e >> (25-11))
MOVL f, R15 \ // y2 = f
ROLL $27, R13 \ // y0 = (e >> (11-6)) ^ (e >> (25-6))
XORL a, R14 \ // y1 = a ^ (a >> (22-13)
XORL g, R15 \ // y2 = f^g
LONG $0xc4fe0f66 \ // PADDD XMM0,XMM4 /* XTMP0 = W[-7] + W[-16] */
XORL e, R13 \ // y0 = e ^ (e >> (11-6)) ^ (e >> (25-6) )
ANDL e, R15 \ // y2 = (f^g)&e
ROLL $21, R14 \ // y1 = (a >> (13-2)) ^ (a >> (22-2))
\
\ // compute s0
\
MOVOU X5, X1 \
LONG $0x0f3a0f66; WORD $0x04cc \ // PALIGNR XMM1,XMM4,0x4 /* XTMP1 = W[-15] */
XORL a, R14 \ // y1 = a ^ (a >> (13-2)) ^ (a >> (22-2))
ROLL $26, R13 \ // y0 = S1 = (e>>6) & (e>>11) ^ (e>>25)
XORL g, R15 \ // y2 = CH = ((f^g)&e)^g
ROLL $30, R14 \ // y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22)
ADDL R13, R15 \ // y2 = S1 + CH
ADDL _xfer+48(FP), R15 \ // y2 = k + w + S1 + CH
MOVL a, R13 \ // y0 = a
ADDL R15, h \ // h = h + S1 + CH + k + w
\ // ROTATE_ARGS
MOVL a, R15 \ // y2 = a
MOVOU X1, X2 \
LONG $0xd2720f66; BYTE $0x07 \ // PSRLD XMM2,0x7 /* */
ORL c, R13 \ // y0 = a|c
ADDL h, d \ // d = d + h + S1 + CH + k + w
ANDL c, R15 \ // y2 = a&c
MOVOU X1, X3 \
LONG $0xf3720f66; BYTE $0x19 \ // PSLLD XMM3,0x19 /* */
ANDL b, R13 \ // y0 = (a|c)&b
ADDL R14, h \ // h = h + S1 + CH + k + w + S0
LONG $0xdaeb0f66 \ // POR XMM3,XMM2 /* XTMP1 = W[-15] MY_ROR 7 */
ORL R15, R13 \ // y0 = MAJ = (a|c)&b)|(a&c)
ADDL R13, h \ // h = h + S1 + CH + k + w + S0 + MAJ
\ // ROTATE_ARGS
MOVL d, R13 \ // y0 = e
MOVL h, R14 \ // y1 = a
ROLL $18, R13 \ // y0 = e >> (25-11)
XORL d, R13 \ // y0 = e ^ (e >> (25-11))
MOVL e, R15 \ // y2 = f
ROLL $23, R14 \ // y1 = a >> (22-13)
MOVOU X1, X2 \
LONG $0xd2720f66; BYTE $0x12 \ // PSRLD XMM2,0x12 /* */
XORL h, R14 \ // y1 = a ^ (a >> (22-13)
ROLL $27, R13 \ // y0 = (e >> (11-6)) ^ (e >> (25-6))
XORL f, R15 \ // y2 = f^g
MOVOU X1, X8 \
LONG $0x720f4166; WORD $0x03d0 \ // PSRLD XMM8,0x3 /* XTMP4 = W[-15] >> 3 */
ROLL $21, R14 \ // y1 = (a >> (13-2)) ^ (a >> (22-2))
XORL d, R13 \ // y0 = e ^ (e >> (11-6)) ^ (e >> (25-6))
ANDL d, R15 \ // y2 = (f^g)&e
ROLL $26, R13 \ // y0 = S1 = (e>>6) & (e>>11) ^ (e>>25)
LONG $0xf1720f66; BYTE $0x0e \ // PSLLD XMM1,0xe /* */
XORL h, R14 \ // y1 = a ^ (a >> (13-2)) ^ (a >> (22-2))
XORL f, R15 \ // y2 = CH = ((f^g)&e)^g
LONG $0xd9ef0f66 \ // PXOR XMM3,XMM1 /* */
ADDL R13, R15 \ // y2 = S1 + CH
ADDL _xfer+52(FP), R15 \ // y2 = k + w + S1 + CH
ROLL $30, R14 \ // y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22)
LONG $0xdaef0f66 \ // PXOR XMM3,XMM2 /* XTMP1 = W[-15] MY_ROR 7 ^ W[-15] MY_ROR */
MOVL h, R13 \ // y0 = a
ADDL R15, g \ // h = h + S1 + CH + k + w
MOVL h, R15 \ // y2 = a
MOVOU X3, X1 \
LONG $0xef0f4166; BYTE $0xc8 \ // PXOR XMM1,XMM8 /* XTMP1 = s0 */
ORL b, R13 \ // y0 = a|c
ADDL g, c \ // d = d + h + S1 + CH + k + w
ANDL b, R15 \ // y2 = a&c
\
\ // compute low s1
\
LONG $0xd7700f66; BYTE $0xfa \ // PSHUFD XMM2,XMM7,0xfa /* XTMP2 = W[-2] {BBAA} */
ANDL a, R13 \ // y0 = (a|c)&b
ADDL R14, g \ // h = h + S1 + CH + k + w + S0
LONG $0xc1fe0f66 \ // PADDD XMM0,XMM1 /* XTMP0 = W[-16] + W[-7] + s0 */
ORL R15, R13 \ // y0 = MAJ = (a|c)&b)|(a&c)
ADDL R13, g \ // h = h + S1 + CH + k + w + S0 + MAJ
\ // ROTATE_ARGS
MOVL c, R13 \ // y0 = e
MOVL g, R14 \ // y1 = a
ROLL $18, R13 \ // y0 = e >> (25-11)
XORL c, R13 \ // y0 = e ^ (e >> (25-11))
ROLL $23, R14 \ // y1 = a >> (22-13)
MOVL d, R15 \ // y2 = f
XORL g, R14 \ // y1 = a ^ (a >> (22-13)
ROLL $27, R13 \ // y0 = (e >> (11-6)) ^ (e >> (25-6))
MOVOU X2, X8 \
LONG $0x720f4166; WORD $0x0ad0 \ // PSRLD XMM8,0xa /* XTMP4 = W[-2] >> 10 {BBAA} */
XORL e, R15 \ // y2 = f^g
MOVOU X2, X3 \
LONG $0xd3730f66; BYTE $0x13 \ // PSRLQ XMM3,0x13 /* XTMP3 = W[-2] MY_ROR 19 {xBxA} */
XORL c, R13 \ // y0 = e ^ (e >> (11-6)) ^ (e >> (25-6))
ANDL c, R15 \ // y2 = (f^g)&e
LONG $0xd2730f66; BYTE $0x11 \ // PSRLQ XMM2,0x11 /* XTMP2 = W[-2] MY_ROR 17 {xBxA} */
ROLL $21, R14 \ // y1 = (a >> (13-2)) ^ (a >> (22-2))
XORL g, R14 \ // y1 = a ^ (a >> (13-2)) ^ (a >> (22-2))
XORL e, R15 \ // y2 = CH = ((f^g)&e)^g
ROLL $26, R13 \ // y0 = S1 = (e>>6) & (e>>11) ^ (e>>25)
LONG $0xd3ef0f66 \ // PXOR XMM2,XMM3 /* */
ADDL R13, R15 \ // y2 = S1 + CH
ROLL $30, R14 \ // y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22)
ADDL _xfer+56(FP), R15 \ // y2 = k + w + S1 + CH
LONG $0xef0f4466; BYTE $0xc2 \ // PXOR XMM8,XMM2 /* XTMP4 = s1 {xBxA} */
MOVL g, R13 \ // y0 = a
ADDL R15, f \ // h = h + S1 + CH + k + w
MOVL g, R15 \ // y2 = a
LONG $0x380f4566; WORD $0xc200 \ // PSHUFB XMM8,XMM10 /* XTMP4 = s1 {00BA} */
ORL a, R13 \ // y0 = a|c
ADDL f, b \ // d = d + h + S1 + CH + k + w
ANDL a, R15 \ // y2 = a&c
LONG $0xfe0f4166; BYTE $0xc0 \ // PADDD XMM0,XMM8 /* XTMP0 = {..., ..., W[1], W[0]} */
ANDL h, R13 \ // y0 = (a|c)&b
ADDL R14, f \ // h = h + S1 + CH + k + w + S0
\
\ // compute high s1
\
LONG $0xd0700f66; BYTE $0x50 \ // PSHUFD XMM2,XMM0,0x50 /* XTMP2 = W[-2] {DDCC} */
ORL R15, R13 \ // y0 = MAJ = (a|c)&b)|(a&c)
ADDL R13, f \ // h = h + S1 + CH + k + w + S0 + MAJ
\ // ROTATE_ARGS
MOVL b, R13 \ // y0 = e
ROLL $18, R13 \ // y0 = e >> (25-11)
MOVL f, R14 \ // y1 = a
ROLL $23, R14 \ // y1 = a >> (22-13)
XORL b, R13 \ // y0 = e ^ (e >> (25-11))
MOVL c, R15 \ // y2 = f
ROLL $27, R13 \ // y0 = (e >> (11-6)) ^ (e >> (25-6))
MOVOU X2, X11 \
LONG $0x720f4166; WORD $0x0ad3 \ // PSRLD XMM11,0xa /* XTMP5 = W[-2] >> 10 {DDCC} */
XORL f, R14 \ // y1 = a ^ (a >> (22-13)
XORL d, R15 \ // y2 = f^g
MOVOU X2, X3 \
LONG $0xd3730f66; BYTE $0x13 \ // PSRLQ XMM3,0x13 /* XTMP3 = W[-2] MY_ROR 19 {xDxC} */
XORL b, R13 \ // y0 = e ^ (e >> (11-6)) ^ (e >> (25-6))
ANDL b, R15 \ // y2 = (f^g)&e
ROLL $21, R14 \ // y1 = (a >> (13-2)) ^ (a >> (22-2))
LONG $0xd2730f66; BYTE $0x11 \ // PSRLQ XMM2,0x11 /* XTMP2 = W[-2] MY_ROR 17 {xDxC} */
XORL f, R14 \ // y1 = a ^ (a >> (13-2)) ^ (a >> (22-2))
ROLL $26, R13 \ // y0 = S1 = (e>>6) & (e>>11) ^ (e>>25)
XORL d, R15 \ // y2 = CH = ((f^g)&e)^g
LONG $0xd3ef0f66 \ // PXOR XMM2,XMM3 /* */
ROLL $30, R14 \ // y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22)
ADDL R13, R15 \ // y2 = S1 + CH
ADDL _xfer+60(FP), R15 \ // y2 = k + w + S1 + CH
LONG $0xef0f4466; BYTE $0xda \ // PXOR XMM11,XMM2 /* XTMP5 = s1 {xDxC} */
MOVL f, R13 \ // y0 = a
ADDL R15, e \ // h = h + S1 + CH + k + w
MOVL f, R15 \ // y2 = a
LONG $0x380f4566; WORD $0xdc00 \ // PSHUFB XMM11,XMM12 /* XTMP5 = s1 {DC00} */
ORL h, R13 \ // y0 = a|c
ADDL e, a \ // d = d + h + S1 + CH + k + w
ANDL h, R15 \ // y2 = a&c
MOVOU X11, X4 \
LONG $0xe0fe0f66 \ // PADDD XMM4,XMM0 /* X0 = {W[3], W[2], W[1], W[0]} */
ANDL g, R13 \ // y0 = (a|c)&b
ADDL R14, e \ // h = h + S1 + CH + k + w + S0
ORL R15, R13 \ // y0 = MAJ = (a|c)&b)|(a&c)
ADDL R13, e \ // h = h + S1 + CH + k + w + S0 + MAJ
\ // ROTATE_ARGS
ROTATE_XS
#define DO_ROUND(a, b, c, d, e, f, g, h, offset) \
MOVL e, R13 \ /* y0 = e */
ROLL $18, R13 \ /* y0 = e >> (25-11) */
MOVL a, R14 \ /* y1 = a */
XORL e, R13 \ /* y0 = e ^ (e >> (25-11)) */
ROLL $23, R14 \ /* y1 = a >> (22-13) */
MOVL f, R15 \ /* y2 = f */
XORL a, R14 \ /* y1 = a ^ (a >> (22-13) */
ROLL $27, R13 \ /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */
XORL g, R15 \ /* y2 = f^g */
XORL e, R13 \ /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */
ROLL $21, R14 \ /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */
ANDL e, R15 \ /* y2 = (f^g)&e */
XORL a, R14 \ /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */
ROLL $26, R13 \ /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */
XORL g, R15 \ /* y2 = CH = ((f^g)&e)^g */
ADDL R13, R15 \ /* y2 = S1 + CH */
ROLL $30, R14 \ /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */
ADDL _xfer+offset(FP), R15 \ /* y2 = k + w + S1 + CH */
MOVL a, R13 \ /* y0 = a */
ADDL R15, h \ /* h = h + S1 + CH + k + w */
MOVL a, R15 \ /* y2 = a */
ORL c, R13 \ /* y0 = a|c */
ADDL h, d \ /* d = d + h + S1 + CH + k + w */
ANDL c, R15 \ /* y2 = a&c */
ANDL b, R13 \ /* y0 = (a|c)&b */
ADDL R14, h \ /* h = h + S1 + CH + k + w + S0 */
ORL R15, R13 \ /* y0 = MAJ = (a|c)&b)|(a&c) */
ADDL R13, h /* h = h + S1 + CH + k + w + S0 + MAJ */
MOVL e, R13 \ // y0 = e
ROLL $18, R13 \ // y0 = e >> (25-11)
MOVL a, R14 \ // y1 = a
XORL e, R13 \ // y0 = e ^ (e >> (25-11))
ROLL $23, R14 \ // y1 = a >> (22-13)
MOVL f, R15 \ // y2 = f
XORL a, R14 \ // y1 = a ^ (a >> (22-13)
ROLL $27, R13 \ // y0 = (e >> (11-6)) ^ (e >> (25-6))
XORL g, R15 \ // y2 = f^g
XORL e, R13 \ // y0 = e ^ (e >> (11-6)) ^ (e >> (25-6))
ROLL $21, R14 \ // y1 = (a >> (13-2)) ^ (a >> (22-2))
ANDL e, R15 \ // y2 = (f^g)&e
XORL a, R14 \ // y1 = a ^ (a >> (13-2)) ^ (a >> (22-2))
ROLL $26, R13 \ // y0 = S1 = (e>>6) & (e>>11) ^ (e>>25)
XORL g, R15 \ // y2 = CH = ((f^g)&e)^g
ADDL R13, R15 \ // y2 = S1 + CH
ROLL $30, R14 \ // y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22)
ADDL _xfer+offset(FP), R15 \ // y2 = k + w + S1 + CH
MOVL a, R13 \ // y0 = a
ADDL R15, h \ // h = h + S1 + CH + k + w
MOVL a, R15 \ // y2 = a
ORL c, R13 \ // y0 = a|c
ADDL h, d \ // d = d + h + S1 + CH + k + w
ANDL c, R15 \ // y2 = a&c
ANDL b, R13 \ // y0 = (a|c)&b
ADDL R14, h \ // h = h + S1 + CH + k + w + S0
ORL R15, R13 \ // y0 = MAJ = (a|c)&b)|(a&c)
ADDL R13, h // h = h + S1 + CH + k + w + S0 + MAJ
// func blockSsse(h []uint32, message []uint8, reserved0, reserved1, reserved2, reserved3 uint64)
TEXT ·blockSsse(SB), 7, $0
MOVQ h+0(FP), SI // SI: &h
MOVQ message+24(FP), R8 // &message
MOVQ lenmessage+32(FP), R9 // length of message
CMPQ R9, $0
JEQ done_hash
ADDQ R8, R9
MOVQ R9, _inp_end+64(FP) // store end of message
// Register definition
// a --> eax
// b --> ebx
// c --> ecx
// d --> r8d
// e --> edx
// f --> r9d
// g --> r10d
// h --> r11d
//
// y0 --> r13d
// y1 --> r14d
// y2 --> r15d
MOVL (0*4)(SI), AX // a = H0
MOVL (1*4)(SI), BX // b = H1
MOVL (2*4)(SI), CX // c = H2
MOVL (3*4)(SI), R8 // d = H3
MOVL (4*4)(SI), DX // e = H4
MOVL (5*4)(SI), R9 // f = H5
MOVL (6*4)(SI), R10 // g = H6
MOVL (7*4)(SI), R11 // h = H7
MOVOU bflipMask<>(SB), X13
MOVOU shuf00BA<>(SB), X10 // shuffle xBxA -> 00BA
MOVOU shufDC00<>(SB), X12 // shuffle xDxC -> DC00
MOVQ message+24(FP), SI // SI: &message
loop0:
LEAQ constants<>(SB), BP
// byte swap first 16 dwords
MOVOU 0*16(SI), X4
LONG $0x380f4166; WORD $0xe500 // PSHUFB XMM4, XMM13
MOVOU 1*16(SI), X5
LONG $0x380f4166; WORD $0xed00 // PSHUFB XMM5, XMM13
MOVOU 2*16(SI), X6
LONG $0x380f4166; WORD $0xf500 // PSHUFB XMM6, XMM13
MOVOU 3*16(SI), X7
LONG $0x380f4166; WORD $0xfd00 // PSHUFB XMM7, XMM13
MOVQ SI, _inp+72(FP)
MOVD $0x3, DI
// Align
// nop WORD PTR [rax+rax*1+0x0]
// schedule 48 input dwords, by doing 3 rounds of 16 each
loop1:
MOVOU X4, X9
LONG $0xfe0f4466; WORD $0x004d // PADDD XMM9, 0[RBP] /* Add 1st constant to first part of message */
MOVOU X9, _xfer+48(FP)
FOUR_ROUNDS_AND_SCHED(AX, BX, CX, R8, DX, R9, R10, R11)
MOVOU X4, X9
LONG $0xfe0f4466; WORD $0x104d // PADDD XMM9, 16[RBP] /* Add 2nd constant to message */
MOVOU X9, _xfer+48(FP)
FOUR_ROUNDS_AND_SCHED(DX, R9, R10, R11, AX, BX, CX, R8)
MOVOU X4, X9
LONG $0xfe0f4466; WORD $0x204d // PADDD XMM9, 32[RBP] /* Add 3rd constant to message */
MOVOU X9, _xfer+48(FP)
FOUR_ROUNDS_AND_SCHED(AX, BX, CX, R8, DX, R9, R10, R11)
MOVOU X4, X9
LONG $0xfe0f4466; WORD $0x304d // PADDD XMM9, 48[RBP] /* Add 4th constant to message */
MOVOU X9, _xfer+48(FP)
ADDQ $64, BP
FOUR_ROUNDS_AND_SCHED(DX, R9, R10, R11, AX, BX, CX, R8)
SUBQ $1, DI
JNE loop1
MOVD $0x2, DI
loop2:
MOVOU X4, X9
LONG $0xfe0f4466; WORD $0x004d // PADDD XMM9, 0[RBP] /* Add 1st constant to first part of message */
MOVOU X9, _xfer+48(FP)
DO_ROUND( AX, BX, CX, R8, DX, R9, R10, R11, 48)
DO_ROUND(R11, AX, BX, CX, R8, DX, R9, R10, 52)
DO_ROUND(R10, R11, AX, BX, CX, R8, DX, R9, 56)
DO_ROUND( R9, R10, R11, AX, BX, CX, R8, DX, 60)
MOVOU X5, X9
LONG $0xfe0f4466; WORD $0x104d // PADDD XMM9, 16[RBP] /* Add 2nd constant to message */
MOVOU X9, _xfer+48(FP)
ADDQ $32, BP
DO_ROUND( DX, R9, R10, R11, AX, BX, CX, R8, 48)
DO_ROUND( R8, DX, R9, R10, R11, AX, BX, CX, 52)
DO_ROUND( CX, R8, DX, R9, R10, R11, AX, BX, 56)
DO_ROUND( BX, CX, R8, DX, R9, R10, R11, AX, 60)
MOVOU X6, X4
MOVOU X7, X5
SUBQ $1, DI
JNE loop2
MOVQ h+0(FP), SI // SI: &h
ADDL (0*4)(SI), AX // H0 = a + H0
MOVL AX, (0*4)(SI)
ADDL (1*4)(SI), BX // H1 = b + H1
MOVL BX, (1*4)(SI)
ADDL (2*4)(SI), CX // H2 = c + H2
MOVL CX, (2*4)(SI)
ADDL (3*4)(SI), R8 // H3 = d + H3
MOVL R8, (3*4)(SI)
ADDL (4*4)(SI), DX // H4 = e + H4
MOVL DX, (4*4)(SI)
ADDL (5*4)(SI), R9 // H5 = f + H5
MOVL R9, (5*4)(SI)
ADDL (6*4)(SI), R10 // H6 = g + H6
MOVL R10, (6*4)(SI)
ADDL (7*4)(SI), R11 // H7 = h + H7
MOVL R11, (7*4)(SI)
MOVQ _inp+72(FP), SI
ADDQ $64, SI
CMPQ _inp_end+64(FP), SI
JNE loop0
done_hash:
RET
// Constants table
DATA constants<>+0x0(SB)/8, $0x71374491428a2f98

View File

@ -22,3 +22,4 @@ func blockArmGo(dig *digest, p []byte) {}
func blockAvx2Go(dig *digest, p []byte) {}
func blockAvxGo(dig *digest, p []byte) {}
func blockSsseGo(dig *digest, p []byte) {}
func blockShaGo(dig *digest, p []byte) {}

View File

@ -46,3 +46,8 @@ func blockSsseGo(dig *digest, p []byte) {
dig.h[0], dig.h[1], dig.h[2], dig.h[3], dig.h[4], dig.h[5], dig.h[6], dig.h[7] = h[0], h[1], h[2], h[3], h[4], h[5], h[6], h[7]
}
func blockShaGo(dig *digest, p []byte) {
blockSha(&dig.h, p)
}
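Which block function actually runs is decided once from CPU feature detection. A hedged sketch of that selection (the predicate arguments and the fallback name are assumptions, not necessarily the package's identifiers):

```
// pickBlock returns the best available block function for the detected CPU
// features; blockGenericGo stands in for the assumed pure-Go fallback.
func pickBlock(hasSHA, hasAVX2, hasAVX, hasSSSE3 bool) func(dig *digest, p []byte) {
	switch {
	case hasSHA:
		return blockShaGo
	case hasAVX2:
		return blockAvx2Go
	case hasAVX:
		return blockAvxGo
	case hasSSSE3:
		return blockSsseGo
	default:
		return blockGenericGo // assumption: name of the portable fallback
	}
}
```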

View File

@ -21,4 +21,5 @@ package sha256
func blockAvx2Go(dig *digest, p []byte) {}
func blockAvxGo(dig *digest, p []byte) {}
func blockSsseGo(dig *digest, p []byte) {}
func blockShaGo(dig *digest, p []byte) {}
func blockArmGo(dig *digest, p []byte) {}

View File

@ -21,6 +21,7 @@ package sha256
func blockAvx2Go(dig *digest, p []byte) {}
func blockAvxGo(dig *digest, p []byte) {}
func blockSsseGo(dig *digest, p []byte) {}
func blockShaGo(dig *digest, p []byte) {}
//go:noescape
func blockArm(h []uint32, message []uint8)

View File

@ -154,7 +154,6 @@ loop:
complete:
RET
// Constants table
DATA ·constants+0x0(SB)/8, $0x71374491428a2f98
DATA ·constants+0x8(SB)/8, $0xe9b5dba5b5c0fbcf

View File

@ -13,11 +13,12 @@
// limitations under the License.
//
// +build ppc64 ppc64le mips mipsle mips64 mips64le s390x
// +build ppc64 ppc64le mips mipsle mips64 mips64le s390x wasm
package sha256
func blockAvx2Go(dig *digest, p []byte) {}
func blockAvxGo(dig *digest, p []byte) {}
func blockSsseGo(dig *digest, p []byte) {}
func blockShaGo(dig *digest, p []byte) {}
func blockArmGo(dig *digest, p []byte) {}