I stumbled across a post on Mastodon. I never really took part in trends, but this one somehow had the right vibes, or maybe I’ve changed? Dunno. The point is, I feel like I want to do it :)
So, inspired by @eli_oat, who has a whole webring going on, without further ado:
Day 21
Found a nasty solution to the PEG first-matched-first-served problem. First of all, I changed statements from:
alt((
Statement::parse_comment,
Statement::parse_variable_definition,
Statement::parse_variable_assignment,
Statement::parse_output,
Expression::parse.map(|v| Statement::ExpressionStatement(v)),
))
to this:
seq!(
_: trace("skip_space_before", space0),
alt((
Statement::parse_comment_oneline,
seq!(
alt((
Statement::parse_comment_multiline,
Statement::parse_if,
Statement::parse_visible,
Statement::parse_variable_definition,
Statement::parse_variable_assignment,
Expression::parse.map(|v| Statement::ExpressionStatement(v)),
)),
_: trace("statement_ending", seq!(
_: space0,
_: alt((",", line_ending, eof))))
)
.map(|(statement,)| statement),
)),),
This way a statement is responsible for skipping preceding whitespace, but most importantly – it’s also responsible for making sure it ends with a comma (,), a newline (\n), or end-of-stream. Otherwise it’s not a valid statement. That takes care of matching NO in NO WAI as a variable definition; NO on its own is no longer a valid statement.
But that introduced another problem. Standalone keywords such as OIC or KTHXBYE were technically valid statements, because they ended with newlines or eofs, so the Block repeat loop was matching them as variable definitions again. I think in LOLCODE that’s technically valid, because any expression that is not assigned to anything is automatically assigned to the default variable IT. I dealt with it temporarily by validating that a name matched as a variable definition is not one of the keywords:
trace(
"variable_name",
seq!((
take_while(1, AsIdentifierChar::is_identifier_start),
take_while(0.., AsIdentifierChar::is_identifier_next),
)),
)
.map(|(a, b)| format!("{}{}", a, b))
.verify(|v: &str| !matches!(v, "OIC" | "KTHXBYE"))
.parse_next(input)
This ain’t pretty but it works. Still, I hope I’ll figure out a cleaner solution eventually.
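The termination rule can be sketched without winnow. A minimal toy version, where ident and the two statement functions are hypothetical stand-ins for the real parsers:

```rust
// Toy sketch of the "a statement must end at `,`, `\n`, or eof" rule.
// `ident` and both statement functions are made up for illustration.
fn ident(input: &str) -> Option<(&str, &str)> {
    let end = input
        .find(|c: char| !c.is_ascii_alphabetic())
        .unwrap_or(input.len());
    if end == 0 {
        None
    } else {
        Some((&input[..end], &input[end..]))
    }
}

// Without a terminator check, `ident` happily matches the NO of "NO WAI":
fn statement_naive(input: &str) -> Option<(&str, &str)> {
    ident(input)
}

// Requiring the statement to end at `,`, `\n`, or end-of-stream rejects it:
fn statement_terminated(input: &str) -> Option<(&str, &str)> {
    let (name, rest) = ident(input)?;
    if rest.is_empty() || rest.starts_with(',') || rest.starts_with('\n') {
        Some((name, rest))
    } else {
        None
    }
}

fn main() {
    assert_eq!(statement_naive("NO WAI"), Some(("NO", " WAI")));
    assert_eq!(statement_terminated("NO WAI"), None);
    assert_eq!(statement_terminated("NO,"), Some(("NO", ",")));
    println!("ok");
}
```

Same shape as the winnow version, just with the ordered-choice machinery stripped away.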
Day 20
Not much today. Spent most of my time re-doing my office. New floors, new furniture. IKEA nesting instinct, but I must say the hack of using a kitchen table top as a desk top rocks. It looks very nice, is solid, and cost like 25% of a hardwood desk top. It finally starts to feel like something out of a fancy youtuber’s background ;>
Day 19
Inspired by @mcc@mastodon.social, I’ve decided to start a list of languages I am planning on exploring/refreshing next year, while catching up on the Advent of Code that I sort of abandoned around day 06. And because everything needs to have a label or a hashtag these days, imma name it #BabelOfCode.
I am not going to rush like in Advent of Code, especially since it’s not a competitive coding scenario. Every week I am going to take a puzzle from the 2024 AoC pool, starting from Day 01, and every puzzle I am going to implement in a different language, or a flavor of a language, to explore the toolchain, its standard library and its idioms (e.g. pure Scheme vs Racket vs Common Lisp) or a platform (e.g. Intel vs ARM CPU, or Linux vs Windows). After a bit of struggling over the past year, I think I have finally settled on Rust as my language of choice. It seems like a sane tradeoff between reason and passion. I always believed tho that one benefits greatly from learning different perspectives, so to broaden mine, I’ll do a bit of language tourism :)
Here’s the list of languages I am planning on covering. 25 weeks, 25 languages:
- Assembly (MASM, DOS)
- C (WATCOM, DOS)
- Uxntal
- Prolog
- AWK
- Fish
- Lua
- Garnet
- Scopes
- OCaml
- Haskell
- Io
- Common Lisp
- PureScript
- Fortran
- Forth
- COBOL
- Scala
- Oberon
- Occam (replacing Modula-2) – thx @neauoire for the tip
- Ada
- Raku
- Gleam
- Pharo
- J
I am going to refactor this to a separate post at some point when I start. Cheers. #BabelOfCode
Day 18
So I dived deeper into the bug I found yesterday. After fixing some low-hanging fruit where I forgot to consume whitespace and newlines (oh, in moments like these I wish I had a lexer first :) I found an interesting bug, or rather a feature, in my PEG parser. I have the following block:
O RLY?
YA RLY, VISIBLE "J00 HAV A CAT"
NO WAI, VISIBLE "J00 SUX"
OIC
And the way the parser goes is… tadum, tsss…
...
> block_statement_separator | "\nNO WAI, VISIBLE \"J00
> alt | "\nNO WAI, VISIBLE \"J00
> "," | "\nNO WAI, VISIBLE \"J00
< "," | backtrack
> line_ending | "\nNO WAI, VISIBLE \"J00
> alt | "\nNO WAI, VISIBLE \"J00
> "\n" | "\nNO WAI, VISIBLE \"J00
< "\n" | +1
< alt | +1
< line_ending | +1
< alt | +1
< block_statement_separator | +1
< terminated | +24
< block_statement | +24
> program_end | "NO WAI, VISIBLE \"J00 SU
> "KTHXBYE" | "NO WAI, VISIBLE \"J00 SU
< "KTHXBYE" | backtrack
< program_end | backtrack
> block_statement:1 | "NO WAI, VISIBLE \"J00 SU
...
> variable_reference | "NO WAI, VISIBLE \"J00 SU
> variable_name | "NO WAI, VISIBLE \"J00 SU
> | "NO WAI, VISIBLE \"J00 SU
> take_while | "NO WAI, VISIBLE \"J00 SU
< take_while | +1
> take_while | "O WAI, VISIBLE \"J00 SUX
< take_while | +1
< | +2
< variable_name | +2
< variable_reference | +2
< alt | +2
< expression | +2
< alt | +2
< statement | +2
< preceded:1 | +2
> space0 | " WAI, VISIBLE \"J00 SUX\
> take_while | " WAI, VISIBLE \"J00 SUX\
< take_while | +1
< space0 | +1
< terminated:1 | +3
> block_statement_separator:1 | "WAI, VISIBLE \"J00 SUX\"
> alt:1 | "WAI, VISIBLE \"J00 SUX\"
> "," | "WAI, VISIBLE \"J00 SUX\"
< "," | backtrack
> line_ending | "WAI, VISIBLE \"J00 SUX\"
> alt | "WAI, VISIBLE \"J00 SUX\"
> "\n" | "WAI, VISIBLE \"J00 SUX\"
< "\n" | backtrack
> "\r\n" | "WAI, VISIBLE \"J00 SUX\"
< "\r\n" | backtrack
< alt | backtrack
< line_ending | backtrack
< alt:1 | backtrack
< block_statement_separator:1 | backtrack
< terminated:1 | backtrack
< block_statement:1 | backtrack
< repeat_till | backtrack
< block | backtrack
< | backtrack
< ya_rly | backtrack
And here I am, making the very mistake I warned against during the talk. Currently my grammar looks like this:
impl Scanner for Statement {
fn parse(input: &mut &str) -> PResult<Statement> {
alt((
Statement::parse_if,
Statement::parse_comment,
Statement::parse_visible,
Statement::parse_variable_definition,
Statement::parse_variable_assignment,
Expression::parse.map(|v| Statement::ExpressionStatement(v)),
)).parse_next(input)
}
fn parse_if(input: &mut &str) -> PResult<Statement> {
trace(
"if",
seq!((
_: Caseless("O RLY?"), _: (till_line_ending, line_ending),
trace("ya_rly", seq!(_: Caseless("YA RLY,"),
_: space1,
Block::parse)),
opt(seq!(_: Caseless("MEBBE"), Expression::parse, _: delimited(space1, ", ", space1), Block::parse)),
opt(seq!(_: Caseless("NO WAI,"), _: space1, Block::parse)),
_: Caseless("OIC")
)),
)
.parse_next(input)
}
}
type Block = Vec<Statement>;
impl Scanner for Expression {
fn parse(input: &mut &str) -> PResult<Expression> {
alt((
Expression::parse_unary_op,
Expression::parse_binary_op,
Expression::parse_variadic_op,
Expression::parse_literal,
Expression::parse_variable_reference,
)).parse_next(input)
}
}
The Expression parser is trying to parse NO and identifies it as a standalone variable reference, which should be terminated with , or \n. But instead it finds WAI, and backtracks the entire ya_rly rule. At this point I am not really even sure how to refactor this, and I’m running out of time for today. Will look at it tomorrow :/
Day 17
Not much done. Tried to wrap up the IF statement, or rather the O RLY? statement, but ran into several issues with the grammar. It looks easy, but I need to prepare for tomorrow’s talk about parsing in Rust that I volunteered to give at work :) Given the issues with O RLY?, I am not sure anymore if I am the best person to give this talk :)
Day 16
Parsing day! But first a minor change: I’ve decided to move interpolate_string to the execution phase. This way the AST holds the original string, and only at execution am I going to interpolate it. This has to do directly with the fact that I don’t wanna deal with variable interpolation, or with breaking the string down into pieces, at parse time. At execution, interpolate_string will get the current environment information, and it’ll deal with variables directly.
The Scanner trait works nicely. It brings clarity to the types:
impl Scanner for Statement {
type Item = Statement;
fn parse(input: &mut &str) -> PResult<Self::Item> {
trace(
"statement",
alt((
Statement::parse_comment,
Statement::parse_variable_definition,
Statement::parse_variable_assignment,
Statement::parse_output,
Expression::parse.map(|v| Statement::ExpressionStatement(v)),
)),
)
.parse_next(input)
}
}
I’m still kind of thinking about how to solve parsing statements. LOLCODE has 3 gotchas. Statements are separated by a new line \n or a comma , but a new line may also be broken across lines with .... Oh, and there’s a single-line comment that does not really need a comma, so both of these statements are valid:
VAR R 10, BTW assign
VAR R 10 BTW assign
The grammar of statements I have now works, but it’s ugly:
let (statements, _): (Block, _) = trace(
"program",
repeat_till(
0..,
trace(
"program_statement",
terminated(
terminated(preceded(space0, Statement::parse), space0),
trace("command_separator", alt((",", line_ending))),
),
),
Program::parse_program_end,
),
)
.parse_next(&mut source)?;
I will need to refactor this once I get more going. For now I am going to add parsing of flow control, and then I think I’ll move on to execution and actually interpreting the script.
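For the shape of the problem, here’s a plain-Rust toy (no winnow) that handles all three gotchas at once; split_statements is my own illustration, not the interpreter’s API:

```rust
// Toy statement splitter for the three gotchas: statements end at `,` or a
// newline, a trailing BTW comment terminates one too (comma or not), and
// `...` glues the next line on. Naive: it ignores BTW inside string literals.
fn split_statements(source: &str) -> Vec<String> {
    let joined = source.replace("...\n", " "); // line continuation
    let mut out = Vec::new();
    for line in joined.lines() {
        // a BTW comment runs to the end of the line
        let line = line.split("BTW").next().unwrap_or(line);
        for stmt in line.split(',') {
            let stmt = stmt.trim();
            if !stmt.is_empty() {
                out.push(stmt.to_string());
            }
        }
    }
    out
}

fn main() {
    // both comment forms from above yield the same statement
    assert_eq!(
        split_statements("VAR R 10, BTW assign\nVAR R 10 BTW assign"),
        vec!["VAR R 10", "VAR R 10"]
    );
    println!("ok");
}
```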
Day 15
Literally got like 20 minutes of time, at 11:30pm. After all those refactorings it’s time to put this all together. And in order to do that, I need stack info, or environment, as I chose to call it. So the first draft looks like this:
#[derive(Debug, Clone, Default)]
pub struct Environment {
variables: HashMap<String, Value>,
functions: HashMap<String, (Vec<String>, Block)>,
}
impl Environment {
pub fn get_variable(&self, name: &str) -> Option<&Value> {
self.variables.get(name)
}
pub fn set_variable(&mut self, name: &str, val: Value) {
self.variables.insert(name.to_string(), val);
}
pub fn set_function(&mut self, name: &str, params: Vec<String>, body: Block) {
self.functions.insert(name.to_string(), (params, body));
}
pub fn get_function(&self, name: &str) -> Option<&(Vec<String>, Block)> {
self.functions.get(name)
}
}
While I usually don’t like having setters and getters, I do not plan on allowing values to be taken (moved) out of the environment, so here they actually might be a good idea. But looking at this, I will need an explicit type for functions for sure.
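A sketch of what that explicit function type could look like, with String standing in for the interpreter’s Value and Block types (both simplifications of mine):

```rust
use std::collections::HashMap;

// Hypothetical replacement for the (Vec<String>, Block) tuple.
#[derive(Debug, Clone, PartialEq)]
struct Function {
    params: Vec<String>,
    body: String, // would be Block in the interpreter
}

#[derive(Default)]
struct Environment {
    functions: HashMap<String, Function>,
}

impl Environment {
    fn set_function(&mut self, name: &str, f: Function) {
        self.functions.insert(name.to_string(), f);
    }
    fn get_function(&self, name: &str) -> Option<&Function> {
        self.functions.get(name)
    }
}

fn main() {
    let mut env = Environment::default();
    env.set_function(
        "GREET",
        Function { params: vec!["NAME".to_string()], body: "VISIBLE NAME".to_string() },
    );
    assert_eq!(env.get_function("GREET").unwrap().params, vec!["NAME"]);
    assert!(env.get_function("MISSING").is_none());
    println!("ok");
}
```

Named fields read much better at call sites than tuple indexing.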
Day 14
Yet another refactoring day. After going bottom-up with the parsers, I have made some top-down decisions on how the enums will look, so that they represent the actual grammar. Not in a PEG sense, but more as an AST. I’ve added a Scanner trait (to avoid a clash with winnow::Parser; I don’t really want to implement the full parser trait) to move all the parsing onto the types themselves as impls:
pub trait Scanner {
type Item;
fn parse(input: &mut &str) -> PResult<Self::Item>;
}
I started moving the parsing of Value to the new model, and also added From and Display traits. I had to break down Operations into BinaryOp and UnaryOp. I also cleaned up a little what goes into the Statement and Expression types. Now I will need to implement all that.
Day 13
Friday 13th. What can go wrong? :)
Did some updates to my home page. Added recent publication, and the panel discussion. And figured I might as well do some long outstanding fixes :)
First of all, I’ve finally applied styling to <code> based on Izzy’s Casa. There’s a recent article on the state of C++ Committee rot, and as good as the article was, a couple of its styling tricks stole my heart, therefore I am stealing them :)
I had to experiment a little with it in my setup, because <code> is used for inline backticked verbatim code as well as for code blocks. I didn’t want to mess with background and shadows for the latter, so I had to find a way to leave them alone. Fortunately they all have a class attribute starting with language-, like class="language-rust". From there it was relatively easy, once I brushed up on CSS selectors:
$box-shadow-xsmall: 2px 1px 0px 0px $code-foreground-color;
$border-small: 1px solid $code-foreground-color;
code {
&:not([class^="language-"]) {
background-color: $code-background-color;
color: $code-foreground-color;
border: $border-small;
box-shadow: $box-shadow-xsmall;
line-height: 1.2;
margin-left: 1px;
margin-right: 2px;
padding: 0.1rem 0.2rem;
vertical-align: 10%;
}
font-family: $font-family-code;
font-optical-sizing: auto;
font-style: normal;
font-weight: 500;
font-size: 0.95rem;
}
I’ve also added a cool decay effect on pictures that I saw in the article Century-Scale Storage by Maxwell Neely-Cohen. It’s a great piece on so many levels. First of all, it touches a subject dear to my heart for the past few years, ever since I started the UNESCO journey, which is the preservation of digital heritage. But also because it is quite neat on the presentation layer. There are many custom shaders, custom p5 animations, and this one effect where, when you scroll, the image seems to vanish into digital noise.
The effect is actually quite easy to pull off. First you need a canvas layer overlay fixed at the top:
<canvas
id="decay-canvas"
width="3172"
height="440"
style="height: 220px; width: 1586px"
></canvas>
#decay-canvas {
position: fixed;
top: 0;
left: 0;
width: 100%;
z-index: 100;
}
then you need all your text to have a z-index higher than the overlay, while the images stay below it; that way the overlay seems to impact only the images. Then you need a little script like this:
function decay() {
const canvas = document.getElementById("decay-canvas");
const pixelSize = window.innerWidth > 1600 ? 8 : 6;
canvas.width = window.innerWidth * 2;
canvas.height = 440;
canvas.style.height = canvas.height / 2 + "px";
canvas.style.width = canvas.width / 2 + "px";
const ctx = canvas.getContext("2d");
ctx.fillStyle = "#fff";
ctx.clearRect(0, 0, canvas.width, canvas.height);
for (let j = 0; j < canvas.height; j += pixelSize) {
const probability = Math.pow(
(canvas.height - 1 - j) / (canvas.height - 1),
3,
);
for (let i = 0; i < canvas.width; i += pixelSize) {
if (Math.random() < probability) {
ctx.fillRect(i, j, pixelSize, pixelSize);
}
}
}
}
The coders behind the article were nice enough to include one little trick I had not seen before. Which proves nothing, because I am not a front-end person, but it was interesting to me. The trick is to fetch the user’s preferences regarding reduced motion. It is available via CSS media queries like this:
@media (prefers-reduced-motion) {
/* styles to apply if a user's device settings are set to reduced motion */
}
I have not personally used that one; in media queries I have really only used resolution so far. I also didn’t know that it’s easy to fetch them from JavaScript. And since it is, it’s really a kind thing to do for your viewers:
window.isReducedMotion = () =>
window.matchMedia(`(prefers-reduced-motion: reduce)`).matches === true;
window.onload = () => {
if (!isReducedMotion()) {
decay();
}
window.addEventListener("resize", () => {
if (!isReducedMotion()) {
decay();
}
});
document.addEventListener("scroll", () => {
if (!isReducedMotion()) {
decay();
}
});
};
Day 12
Today I was traveling to Warsaw for a conference, the 12th Machine Intelligence and Digital Interaction. I had one paper accepted there, and the organizers were also kind enough to invite me to a panel discussion on digital heritage in architecture and arts.
Met a couple of cool people, including professors Wiesław Kopeć from the XR Center at PJAT and Władek Fuchs from the Volterra-Detroit Foundation.
I have also spent a great time with my favorite man behind The Foundation for the History of Home Computers, Maciej Grzeszczuk, and the remaining 0.33 of our lil’ informal research team, Kinga Skorupska. Love those guys, srsly. Coming back with at least 2 great ideas. Must. Not. Get. Distracted. :)
The train back home was announced to be 15 minutes late, but eventually came after 45. Disaster.
Day 11
Not much time today. Renamed Token to Node. I was thinking of naming it Expression, but it didn’t sit right, as some of the enum variants were in reality just statements…
I’ve also added a simple Node wrapper for Value that I actually hate, as it looks ugly, and I am hopefully going to refactor it soon. For now it’s enough tho:
pub(crate) fn parse_value(input: &mut &str) -> PResult<Node> {
trace("value", Value::parse)
.map(|v| Node::Value(v.unwrap()))
.parse_next(input)
}
And that should enable working on actual expressions, operators and function calls.
Day 10
A little bit more refactoring. Decided to pull the parsing functions into the type, for both Value and Token:
impl Value {
pub(crate) fn parse(input: &mut &str) -> PResult<Option<Self>> {
trace(
"value",
alt((
Self::parse_float_dot,
Self::parse_uint,
Self::parse_string,
Self::parse_boolean,
Self::parse_undefined,
)),
)
.parse_next(input)
}
// ...
}
Day 09
Had to do a little bit of refactoring. I never understood why some Rust library authors prefer to stuff 100k lines of code straight into lib.rs. It always felt weird and unsettling. I like my files broken down. So when my interpreter started to approach 500 LOC, I had to split it into several files.
Now that I have proper values, as well as variables that can hold values, I figured I need a trait for things that can be turned into a value. In the meantime, I had to look up the syntax for matching struct-like enum variants… that’s definitely not overly intuitive, but looks pretty after a while:
impl ToValue for Token {
fn to_value(&self) -> &Value {
match self {
Token::Variable {
value: Some(ref v), ..
} => v,
_ => unreachable!(),
}
}
}
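The same matching pattern in a runnable toy form, with types simplified; returning an Option instead of hitting unreachable!() is one possible direction:

```rust
// Simplified stand-ins for the interpreter's types, just to show the
// struct-variant match with `value: Some(ref v), ..`.
#[derive(Debug, PartialEq)]
enum Value {
    Numbr(i64),
}

enum Token {
    Variable { name: String, value: Option<Value> },
    Keyword(String),
}

fn to_value(token: &Token) -> Option<&Value> {
    match token {
        Token::Variable { value: Some(ref v), .. } => Some(v),
        _ => None,
    }
}

fn main() {
    let var = Token::Variable { name: "X".to_string(), value: Some(Value::Numbr(1)) };
    let kw = Token::Keyword("KTHXBYE".to_string());
    assert_eq!(to_value(&var), Some(&Value::Numbr(1)));
    assert_eq!(to_value(&kw), None);
    println!("ok");
}
```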
Day 08
A day of string interpolation. In the meantime, I’ve learned that Rust does not understand \g
. But maybe that’s for the better… who’d use that in 2024 anyway?
fn test_string_interpolate() {
assert_eq!(interpolate_from(":)"), "\n".to_string());
assert_eq!(interpolate_from(":):>"), "\n\t".to_string());
assert_eq!(interpolate_from(":o"), "\x07".to_string());
assert_eq!(interpolate_from(":\""), "\"".to_string());
assert_eq!(interpolate_from("::"), ":".to_string());
assert_eq!(interpolate_from(":(70)"), "p".to_string());
assert_eq!(interpolate_from(":(70):(6F)"), "po".to_string());
assert_eq!(
interpolate_from(":[black star]:[snowman]:[black star]"),
"★☃★".to_string()
);
assert_eq!(
interpolate_from(":):(70):::>:[black star]:(6F):[snowman]::"),
"\np:\t★o☃:"
);
}
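For the curious, a sketch of how such an interpolate_from could be implemented; the Unicode-name table here is hard-coded to the two names from the tests, while the real thing would need a proper name lookup:

```rust
use std::collections::HashMap;

// Sketch of LOLCODE-style string interpolation. The name table is a stand-in;
// a full implementation would resolve arbitrary Unicode names.
fn interpolate_from(input: &str) -> String {
    let names: HashMap<&str, char> =
        [("black star", '★'), ("snowman", '☃')].into_iter().collect();
    let mut out = String::new();
    let mut chars = input.chars();
    while let Some(c) = chars.next() {
        if c != ':' {
            out.push(c);
            continue;
        }
        match chars.next() {
            Some(')') => out.push('\n'),
            Some('>') => out.push('\t'),
            Some('o') => out.push('\x07'),
            Some('"') => out.push('"'),
            Some(':') => out.push(':'),
            Some('(') => {
                // :(<hex>) - a character by its hex code point
                let hex: String = chars.by_ref().take_while(|&c| c != ')').collect();
                if let Some(ch) = u32::from_str_radix(&hex, 16).ok().and_then(char::from_u32) {
                    out.push(ch);
                }
            }
            Some('[') => {
                // :[<name>] - a character by its Unicode name
                let name: String = chars.by_ref().take_while(|&c| c != ']').collect();
                if let Some(&ch) = names.get(name.as_str()) {
                    out.push(ch);
                }
            }
            Some(other) => {
                out.push(':');
                out.push(other);
            }
            None => out.push(':'),
        }
    }
    out
}

fn main() {
    assert_eq!(interpolate_from(":)"), "\n");
    assert_eq!(interpolate_from(":(70):(6F)"), "po");
    assert_eq!(interpolate_from(":[black star]:[snowman]:[black star]"), "★☃★");
    println!("ok");
}
```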
Day 07
A little context switch again. The riddle from day four of Advent of Code had bothered me for some time. It was easy to solve in an imperative way, but I really wanted to solve it in Racket in a way that would be generic for both parts.
So first we define the patterns. I wanted to describe both searches in a purely declarative manner:
(define part-1-pattern '((#\X "XMAS"
(((0 0) (0 1) (0 2) (0 3))
((0 0) (1 0) (2 0) (3 0))
((0 0) (1 1) (2 2) (3 3))
((0 0) (1 -1) (2 -2) (3 -3))
((0 0) (0 -1) (0 -2) (0 -3))
((0 0) (-1 0) (-2 0) (-3 0))
((0 0) (-1 -1) (-2 -2) (-3 -3))
((0 0) (-1 1) (-2 2) (-3 3))))))
(define part-2-pattern '(
(#\A "MASMAS" (((-1 -1) (0 0) (1 1) (-1 1) (0 0) (1 -1))))
(#\A "MASSAM" (((-1 -1) (0 0) (1 1) (-1 1) (0 0) (1 -1))))
(#\A "SAMMAS" (((-1 -1) (0 0) (1 1) (-1 1) (0 0) (1 -1))))))
Then some helpers, to get the size of the grid…
(define (grid-size grid)
(list (string-length (car grid)) (length grid)))
… to check if coordinates are valid…
(define (coord-valid? xy width height)
(not (or (< (car xy) 0)
(>= (car xy) height)
(< (cadr xy) 0)
(>= (cadr xy) width))))
… and finally to get the character at the given coords, or #f if the coords are out of bounds.
(define (coord-get grid xy width height)
(if (coord-valid? xy width height)
(string-ref (list-ref grid (car xy)) (cadr xy))
#f))
Next, I loop through the patterns. For every pattern I try to get the characters under its coords. I further process only the strings that are fully fetched – i.e. they have as many characters as there are coords in the pattern – and I return a list with the strings for each matched pattern.
(define (patterns->characters grid xy width height patterns)
(let* ([bind-coord-get (lambda (bind-xy)
(coord-get grid
(list (+ (car xy) (car bind-xy))
(+ (cadr xy) (cadr bind-xy)))
width
height))])
(define (pattern->characters pattern)
(let* ([result (filter-map bind-coord-get pattern)]
[pattern-len (length pattern)]
[result-len (length result)])
(if (equal? pattern-len result-len) result #f)))
(map list->string (filter-map pattern->characters patterns))))
And here’s the main loop. Going through the grid, I look for the character defined in the pattern. If I find one, I get all the strings defined by the pattern coords. Then I sum how many of those strings I’ve found actually matched a given pattern.
(define (solve-pattern grid pattern)
(let* ([result '()]
[size (grid-size grid)]
[width (car size)]
[height (cadr size)]
[search-char (car pattern)]
[search-string (cadr pattern)]
[search-pattern (caddr pattern)]
[is-search-string?
(lambda (x)
(or (equal? x search-string)
(equal? (list->string (reverse (string->list x)))
search-string)))])
(for* ([x (in-range height)]
[y (in-range width)])
(when (char=? search-char (coord-get grid (list x y) width height))
(set!
result
(cons (length (filter identity
(map is-search-string?
(patterns->characters grid
(list x y)
width
height
search-pattern))))
result))))
(apply + result)))
Finally, I sum all the sums to get the final hit count. In the beginning this was only one loop. The core function was the one above, but I didn’t find a feasible way to get Part 2 working with a single search string. That’s why the pattern eventually became a list of patterns.
(define (solve grid patterns)
(apply + (map (lambda (x) (solve-pattern grid x)) patterns)))
And the formalities, print out the answers.
(define (main)
(let* ([input (port->string (open-input-file "day04.txt") #:close? #t)]
[grid (string-split input #px"\n")])
(printf "Total XMAS: ~a\n" (solve grid part-1-pattern))
(printf "Total X-MAS: ~a\n" (solve grid part-2-pattern))))
(main)
It ain’t pretty. The functional mindset was a long dream of mine, but it is apparent that it is hard for me to unlearn imperative ways.
Day 06
Not much time today either. Work, Xmas Party afterhours, and setting up cookies and milk for Santa with the kids, but also a trap to catch him in the act :)
Still, I managed to add parsing for the following sections of the 1.2 spec:
- File Creation
- Comments
- Variables
More updates tomorrow.
Day 05
Today was a busy day, couldn’t spend too much time off work. But I did manage to think through, or at least take a stab at, the last missing piece I think I need, which is:
File I/O
- Open a file
I HAS A <file_handle> ITZ SHIP "<filename>" WIF "<mode>"
Mode can be:
- SNAGGING: read mode
- SLAPIN: write mode, truncates the file
- ADDIN: append mode
All file input/output is done assuming text mode. No support for binary yet.
- Read from a file
SLURP <variable> OFFA <file_handle>
- Write to a file
SCOOP <data> IN2 <file_handle>
The command VISIBLE <data> is a backwards-compatible equivalent of SCOOP <data> IN2 MAHOUT.
- Close the file
UNSHIP <file_handle>
- Predefined file handles
As in many systems, LULZ comes with predefined handles for the standard streams:
- stdin is MAHIN
- stdout is MAHOUT
- stderr is MAHBAD
You can use them in place of file_handle.
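For the interpreter side, the three modes could plausibly map onto std::fs like this (a sketch; open_mode and the mapping are hypothetical, text mode assumed as in the draft):

```rust
use std::fs::{File, OpenOptions};
use std::io;

// Hypothetical mapping of the proposed SHIP modes onto std::fs.
fn open_mode(path: &str, mode: &str) -> io::Result<File> {
    match mode {
        "SNAGGING" => OpenOptions::new().read(true).open(path),
        "SLAPIN" => OpenOptions::new().write(true).create(true).truncate(true).open(path),
        "ADDIN" => OpenOptions::new().append(true).create(true).open(path),
        _ => Err(io::Error::new(io::ErrorKind::InvalidInput, "unknown mode")),
    }
}

fn main() -> io::Result<()> {
    use std::io::Write;
    let path = std::env::temp_dir().join("lulz_io_sketch.txt");
    let path = path.to_str().unwrap();
    // SLAPIN truncates, ADDIN appends, then read the result back
    open_mode(path, "SLAPIN")?.write_all(b"J00 HAV A CAT")?;
    open_mode(path, "ADDIN")?.write_all(b"!")?;
    assert_eq!(std::fs::read_to_string(path)?, "J00 HAV A CAT!");
    println!("ok");
    Ok(())
}
```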
Day 04
Oh dear attention deficit… today isn’t my day with Scheme. To be honest, it’s one of those days where everything I touch turns to sh*t. So I needed something fun. The chain of events that led me here is completely random, but in the end I stumbled upon LOLCODE! It’s so appealing, in a very weird way, and still…
The only problem with this language is its age – it’s almost 15 years old. That nearly qualifies as retro; there are things in museums that are just as old. The issue isn’t necessarily its age, but the fact that the authors never fully standardized certain aspects of it. What I really need are arrays and file input/output capabilities. So here’s my attempt at a spec, and an interpreter might follow soon ;)
Arrays
- Empty array declaration
I HAS A <array_name> ITZ A BUKKIT
- Array with initial values
I HAS A <array_name> ITZ A BUKKIT WIF <value1> AN <value2> AN <value3> MKAY
- Is array empty?
IZ <array_name> GHOST?
- Getting array size
HOW BIG IZ <array_name>
- Retrieving an item from array
I HAS A ma_value ITZ ma_numbers LOK 1
- Appending value at the end of array
YEET <value> LOK <array_name>
- Updating value at the index
YEET <value> INDA <array_name> LOK <index>
- Removing element at index
BAI <array_name> LOK <index>
- Iterating over array
The label in the IM IN YR <label> for-loop defined in version 1.2 of the spec is not used, and I was tempted to re-use it, but at the end of the day I think a dedicated loop fits better here:
FORE4CH YR <item_variable> IN <array_name>
<statements>
KTH
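Inside the interpreter, the BUKKIT operations could plausibly map onto a Vec; this sketch is my own illustration, with String standing in for the Value type:

```rust
// Hypothetical Vec-backed BUKKIT; the method names mirror the draft spec.
#[derive(Debug, Default, PartialEq)]
struct Bukkit(Vec<String>);

impl Bukkit {
    fn is_ghost(&self) -> bool {                   // IZ <array> GHOST?
        self.0.is_empty()
    }
    fn how_big(&self) -> usize {                   // HOW BIG IZ <array>
        self.0.len()
    }
    fn lok(&self, i: usize) -> Option<&String> {   // <array> LOK <index>
        self.0.get(i)
    }
    fn yeet(&mut self, v: String) {                // YEET <value> LOK <array>
        self.0.push(v);
    }
    fn yeet_inda(&mut self, v: String, i: usize) { // YEET <value> INDA <array> LOK <index>
        if i < self.0.len() {
            self.0[i] = v;
        }
    }
    fn bai(&mut self, i: usize) {                  // BAI <array> LOK <index>
        if i < self.0.len() {
            self.0.remove(i);
        }
    }
}

fn main() {
    let mut ma_numbers = Bukkit::default();
    assert!(ma_numbers.is_ghost());
    ma_numbers.yeet("1".to_string());
    ma_numbers.yeet("2".to_string());
    ma_numbers.yeet_inda("42".to_string(), 1);
    assert_eq!(ma_numbers.how_big(), 2);
    assert_eq!(ma_numbers.lok(1), Some(&"42".to_string()));
    ma_numbers.bai(0);
    assert_eq!(ma_numbers.lok(0), Some(&"42".to_string()));
    println!("ok");
}
```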
And that’s it for now. In the meantime I am working on implementing the basics of the 1.2 spec using winnow and Rust:
fn parse_program_begin(input: &mut &str) -> PResult<Token> {
seq!(Token::Hai{
_: "HAI",
_: space1,
major: dec_uint,
_: ".",
minor: dec_uint
})
.parse_next(input)
}
fn parse_program_end(input: &mut &str) -> PResult<Token> {
"KTHXBAI".map(|_| Token::Kthxbai).parse_next(input)
}
Day 03
Slowly everything starts coming together. The ports turned out to be quite simple and intuitive. They’re just a HANDLE, an abstraction over input used by the other read/write functions. The lexer is not bad either; it follows a pattern similar to cond or match-string.
(define-lex-abbrev CRLF (seq #\return #\newline))
(define-lex-abbrev NEWLINE (or CRLF #\return #\newline))
(define-lex-abbrev BACKSLASH-NEWLINE (seq #\\ NEWLINE))
(define-lex-abbrev ANY-NEWLINE (or BACKSLASH-NEWLINE NEWLINE))
(define-lex-abbrev ANY-CHAR (char-complement (union)))
(define lex
(lexer
[(eof) 'EOF]
[ANY-NEWLINE (lex input-port)]
[ANY-CHAR (display lexeme)]))
(define (run-lexer port)
(when (not (eq? 'EOF (lex port))) (run-lexer port)))
(define (main)
(run-lexer (open-input-string "_\nFI\\\nLE\n__")))
(module+ main
(main))
Now that I have this out of the way, I am going to do something I haven’t done in a long time - I am going to use the manual and implement the preprocessor according to the rules :)
Day 02
Not much time today, so I didn’t really manage to write any code towards the preprocessor. I was also still having a kind of inner battle between heart (Scheme) and reason (Rust). Not sure if that’s a victory, but the heart still wins :)
I’ve never really studied parsers. I think I am aware enough to appreciate the complexity and subtlety that goes into them, but throughout my career I only wrote very naive handmade parsers, then used Spirit quite a lot – and frankly loved it! – and then used some parsing expression grammar (PEG) libraries like nom or winnow. I must admit tho that I find PEGs a little confusing… the first-match-wins rule never won my heart. I think in backtracking terms…
Also, since I really only picked up Scheme like 2-3 days ago, every piece of documentation I read opens another door to yet another new concept. Like, I tried to delay learning input-ports and kept operating on strings as long as I could, but with the lexer it was inevitable. So here I am, going down a rabbit hole… :)
Day 01
I have a few projects in my head that I was trying to finish (who am I kidding, I was trying to even start them) for a long, long time. I am not going to write about the end goal yet, as I am not sure how long I am going to last, but for now I want to focus on writing a C preprocessor. But also, I really wanna write it in Scheme. Or maybe I should rather say Racket. And to start well, here’s a little Advent of Code I did, my first Scheme in “production”:
(define (parse-input lines)
(for/fold ([left-list '()]
[right-list '()])
([line lines])
(let* ([parts (map string->number (regexp-split #px"\\s+" line))]
[left (first parts)]
[right (second parts)])
(values (cons left left-list) (cons right right-list)))))
(define (read-input filename)
(parse-input (file->lines filename)))
(define (calculate-total-distance left right)
(letrec ([calculate (lambda (l r a)
(cond
[(or (empty? l) (empty? r)) a]
[else (calculate (rest l) (rest r) (+ a (abs (- (first l) (first r)))))]))])
(calculate (sort left <) (sort right <) 0)))
(define (calculate-similarity-score left right)
(let ([frequency-map (make-hash)])
(for ([num right])
(hash-set! frequency-map num (+ (hash-ref frequency-map num 0) 1)))
(for/fold ([acc 0]) ([num left])
(+ acc (* num (hash-ref frequency-map num 0))))))
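For comparison, the same two computations sketched in Rust (my toy translation, not part of the Racket solution):

```rust
use std::collections::HashMap;

// Part 1: sort both lists and sum the pairwise absolute differences.
fn total_distance(mut left: Vec<i64>, mut right: Vec<i64>) -> i64 {
    left.sort();
    right.sort();
    left.iter().zip(&right).map(|(l, r)| (l - r).abs()).sum()
}

// Part 2: weight each left number by its frequency in the right list.
fn similarity_score(left: &[i64], right: &[i64]) -> i64 {
    let mut freq: HashMap<i64, i64> = HashMap::new();
    for &n in right {
        *freq.entry(n).or_insert(0) += 1;
    }
    left.iter().map(|n| n * freq.get(n).unwrap_or(&0)).sum()
}

fn main() {
    // the example lists from the puzzle description
    let left = vec![3, 4, 2, 1, 3, 3];
    let right = vec![4, 3, 5, 3, 9, 3];
    assert_eq!(total_distance(left.clone(), right.clone()), 11);
    assert_eq!(similarity_score(&left, &right), 31);
    println!("ok");
}
```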
I already know it could be better, and I’ll polish it tomorrow. It’s kinda getting late, and I wanted to get this post out. G’nite :)