import numpy as np
from functools import reduce
input = """1224346384
def flash(iter, takodachi, count, all_flash, flash_threshold=10):
    # Every octopus gains one energy at the start of the step.
    takodachi += 1
    width, height = takodachi.shape
    old = set()
    # Keep flashing until no new cell crosses the threshold.
    while True:
        flash_mask = np.where(np.greater_equal(takodachi, flash_threshold))
        d = list(set(zip(flash_mask[0], flash_mask[1])).difference(old))
        if len(d) == 0:
            break
        mask = (np.array([p[0] for p in d]), np.array([p[1] for p in d]))
        old = old.union(set(zip(mask[0], mask[1])))
        adjacents = []
        for x in [-1, 0, 1]:
            for y in [-1, 0, 1]:
                if x != 0 or y != 0:
                    mx = mask[0] + x
                    my = mask[1] + y
                    keep = np.where((mx >= 0) & (mx < width) & (my >= 0) & (my < height))
                    adjacents.append((mx[keep], my[keep]))
        # Within one direction offset every target is distinct, so
        # fancy-indexed += increments each neighbour exactly once.
        for adjacents_mask in adjacents:
            takodachi[adjacents_mask] += 1
    d = list(old)
    flashes = len(d)
    if flashes > 0:
        mask = (np.array([p[0] for p in d]), np.array([p[1] for p in d]))
        takodachi[mask] = 0
    # Record the step (1-indexed) on which every octopus flashed at once.
    if flashes == width * height:
        all_flash.append(iter + 1)
    return takodachi, flashes + count, all_flash
if __name__ == '__main__':
width = len(input.split("\n", 1))
n = np.array([c for c in input.replace('\n', '')], dtype=int)
tako = n.reshape(width, -1)
max_step = 500
tako, flashes, all_flashes = reduce(lambda acc, iter: flash(iter, acc, acc, acc), np.arange(max_step), (tako, 0, ))
print(tako, flashes, all_flashes)
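To make the flash rules concrete, here is a small self-contained pure-Python sketch of one step on a hypothetical 2×2 grid. It is a readable cross-check of the same logic, not part of the original NumPy solution:

```python
def step(grid):
    """One step of the AoC 2021 day-11 rules on a list-of-lists grid."""
    h, w = len(grid), len(grid[0])
    # Every octopus gains one energy.
    grid = [[v + 1 for v in row] for row in grid]
    flashed = set()
    while True:
        # Cells at or above the threshold that haven't flashed this step.
        new = [(r, c) for r in range(h) for c in range(w)
               if grid[r][c] >= 10 and (r, c) not in flashed]
        if not new:
            break
        for r, c in new:
            flashed.add((r, c))
            # Each flash raises all in-bounds neighbours by one.
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if (dr or dc) and 0 <= r + dr < h and 0 <= c + dc < w:
                        grid[r + dr][c + dc] += 1
    # Flashed cells reset to zero.
    for r, c in flashed:
        grid[r][c] = 0
    return grid, len(flashed)

grid, flashes = step([[9, 9], [9, 9]])
# all four octopuses flash once and reset: flashes == 4, grid == [[0, 0], [0, 0]]
```

On this tiny grid every cell crosses the threshold after the initial increment, which exercises the chain-reaction loop and the reset in one pass.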
Takeaways: (1) M1 Max is powerful, deserves its price and outperforms some more expensive ARM64 servers. (2) The Always Free tier available on Oracle cloud can provide ARM64 servers with decent performance.
Just a quick compilation test amongst Apple M1 Max, AWS c6g.2xlarge, c6g.metal and Oracle Ampere (VM.Standard.A1.Flex).
Hardware-wise, I'm using a MacBook Pro 14-inch with the M1 Max (10 cores: 8 performance + 2 efficiency). The build is done in an ARM64 Docker container using the arm64v8/ubuntu image. The Docker engine can use 8 cores and 14 GB of RAM. It's worth noting that allocating 8 cores to the Docker engine does not guarantee they are all performance cores: core scheduling is handled by macOS, and recent macOS versions offer no core pinning.
The hardware configuration on AWS c6g.2xlarge is just the stock one: 8 cores and 16 GB of RAM. The system image on the c6g.2xlarge machine is also Ubuntu 20.04.
As for the Oracle Ampere (VM.Standard.A1.Flex), I tested three configurations:
4 CPUs, 24 GB of RAM
8 CPUs, 48 GB of RAM
16 CPUs, 96 GB of RAM
The first configuration is eligible for the Oracle Always Free tier, while the second is meant to match the core count of the M1 Max and AWS c6g.2xlarge. The last one is the maxed-out spec (by default; the quota can be raised by upgrading to a paid account). The OS image on these configurations is Ubuntu 20.04 as well (image build 2021.10.15-0).
The M1 Max completed the compilation in ~28 minutes, while AWS c6g.2xlarge and c6g.metal took ~45 minutes and xx minutes respectively. The Oracle Ampere machines finished in ~68 minutes (4c), ~42 minutes (8c) and (16c). The precise results are shown in the table below.
[Table: GCC 11.2 compilation time on different machines. Only fragments survived extraction: the "Compile Time (seconds)" column header, the MBP 14" M1 Max row, and its cost column ("One Time, ≥$2,499.00").]
As we can see, the M1 Max is about 37.98% faster than the c6g.2xlarge machine.
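As a sanity check, the speedup can be recomputed from the rounded minute figures quoted above (the exact 37.98% presumably comes from the per-second timings in the table):

```python
# Rounded wall-clock compile times from the post, in minutes.
m1_max = 28
c6g_2xlarge = 45

# "X% faster" here means the reduction in compile time relative to c6g.2xlarge.
speedup = (c6g_2xlarge - m1_max) / c6g_2xlarge * 100
print(f"{speedup:.1f}%")  # ~37.8%, consistent with the reported 37.98%
```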