mdude: Yellow and green ink blot style image. (Default)
Oh hey, someone let me know I ought to consider describing what I made in https://mdude.dreamwidth.org/1762.html in a way that actually makes sense, instead of being mostly incoherent rambling about my thought process.

So, the main idea of how the function acts as an analog version of the Fredkin/controlled-swap gate: it takes some sequence of N values and shifts them around by X places. If X is a whole number, the values just get moved that many places over, wrapping around. In such cases, the output at position n can just equal Input[(n+X) mod N]. But when X isn't whole, we need some way of producing outputs that lie between the outputs for Input[floor(X)] and Input[ceiling(X)].

So, easiest way to do that would be to make the output for function(X) equal:
(Input[floor(X)]*(1-(fractional part of X))) + (Input[ceiling(X)]*(fractional part of X))
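As a minimal sketch of that interpolation (assuming the wrap-around indexing described above; the function and variable names here are my own, not taken from the linked graph):

```python
import math

def frac_shift(inputs, x):
    """Shift a sequence of analog values by a (possibly fractional)
    number of places, wrapping around, by linearly interpolating
    between each value and its neighbor."""
    n = len(inputs)
    lo = math.floor(x)
    t = x - lo  # fractional part of x
    return [
        inputs[(i + lo) % n] * (1 - t) + inputs[(i + lo + 1) % n] * t
        for i in range(n)
    ]
```

Whole-number shifts reduce to a plain rotation of the list, and fractional shifts blend each pair of neighbors; either way the sum of the outputs equals the sum of the inputs, since the interpolation weights always add to one.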

The only reason I was relying so much on max and min is that I know those are easy to implement in mechanical analog computers: just let either input push or pull on the output, so that the output pushes/pulls as much as whichever input is doing so the most. While I'm at it, here's a version that ditches the modulo function at the cost of only being accurate over a limited range of input: https://www.desmos.com/calculator/or2gurny0c

Edit: I should probably point out that I later realized I'm still doing this wrong, so I'll need to keep working on it. For some reason it just seems slightly too hard to keep all the parts I'm working with in my head at once while translating it from geometry to code. I think it might work if I take some earlier ideas I was trying and go "something something Viviani's theorem" at it, but that'll wait until I feel like getting back to it.
To continue writing down ideas I ought to have written: I like the idea of making computer parts from scratch, impractical as it may be for producing a machine that meets modern performance expectations. I like the idea of reversible computing. I also enjoy playing with the idea of analog computing. These things entwine in a series of thoughts that begins with the observation that a Fredkin gate, in addition to being reversible, outputs the same number of each value as it receives as input.

This means that even if the two values are represented by a phenomenon which embodies different energy levels, the output can embody the same amount of energy as the input. Further, it can do so without the output using different energy levels for each value than the input! If there were a continuous version of this, I figured it'd be pretty handy for designing analog computers.

The reason I thought this was that I was already looking into ways to make reversible analog computers, as I wanted to see if the reversibility would result in substantially reduced noise in the system and thus a higher precision of results for a given level of quality of components. That in turn just seems generally useful for making computers from scratch.

So, the first thought I had was that what I'm doing would have something to do with affine transformations, as earlier research and discussion called my attention to the fact that affine transformations are both reversible and continuous functions. Specifically, rotation seemed like a good option, since if I'm taking the two inputs to be swapped as the X and Y coordinates, and the controlling input as rotation, then rotating 90 degrees should do something like swap them? More like invert them both, really.
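To make the rotation idea concrete (a minimal sketch; the function name is mine): a quarter turn sends (x, y) to (-y, x), so the two values do trade places, but one of them picks up a sign, and a half turn negates both.

```python
import math

def rotate(x, y, theta):
    """Rotate the point (x, y) by theta radians about the origin."""
    c, s = math.cos(theta), math.sin(theta)
    return (x * c - y * s, x * s + y * c)

# A quarter turn: the coordinates swap, but one is negated.
qx, qy = rotate(1.0, 2.0, math.pi / 2)  # roughly (-2.0, 1.0)
```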

At the time, I was more interested in getting around the fact that I don't want a circular arc, since that clearly wouldn't preserve the property where the sum of the outputs matches the sum of the inputs. So I tried a scheme where, before rotating, I transform the point into one with the same angle from the origin but with its Chebyshev distance as its Euclidean distance, then reverse the transformation afterward.

This ended up making the function for the X/Y coordinate inputs over the rotation input equivalent to what I'll call trapezoid waves, since that's what they look like, and a simpler way to implement the result is to just make a triangle wave and then use min and max functions to clip off the tops. However, I noticed that this still doesn't actually conserve the sum-invariance property I wanted! So then I thought, hey, if I rotate 45 degrees normally first, then just do the triangle waves without clipping them, I'm now going around the edge of a Manhattan-distance circle instead of a Chebyshev-distance one!
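Here's a minimal sketch of that clipped-triangle-wave construction (the period and clip levels are illustrative choices of mine, not the constants from the actual graph):

```python
def tri(r):
    """Triangle wave with period 4: 0 at r=0, +1 at r=1, -1 at r=3."""
    return abs((r - 1) % 4 - 2) - 1

def trapezoid(r, lo=-0.5, hi=0.5):
    """The same triangle wave with its peaks clipped off via min/max."""
    return max(lo, min(hi, tri(r)))
```

Clipping with min and max flattens the peaks into the plateaus that give the wave its trapezoid shape.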

So that actually worked, but only if I presume the signal values can be negative, which might not be true of whatever computing media I go with. Otherwise I'm stuck in the upper right corner. I guess I could just use a different offset for the triangle waves so the function for any given X/Y input just covers the diagonal line where X+Y is constant. Maybe that actually would work? Something about it just didn't feel right to me, though.

So, I decided to make a function that takes X, Y, and Z inputs as well as the control/rotation, where each input's output is just a triangle wave clipped at the bottom, but all three together end up tracing the perimeter of a triangular wafer cut from a cube. So there we go! Maybe some time I'll try using it for a thing. Also, here it is on a web-based graphing calculator: https://www.desmos.com/calculator/rslh0cvwrk

Edit: Just realized I could generalize this to rotating arbitrary numbers of inputs by having each input other than the control run through the function alongside its two neighboring inputs. Though also, it seems I'm wrong about this version of the function doing what I want at all, as actually checking (I1+I2+I3)-(O1+O2+O3) shows that it does not in fact produce a function f(r) that is always zero. So I'll need to keep working on this, I guess.

Edit2: Oh, nice, I managed to fix the thing I was making! Working cube slice gate function: https://www.desmos.com/calculator/15wnxj923y
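My reading of the fixed version, as a sketch rather than a transcription of the Desmos formulas: I'm modeling it as a sum-preserving interpolation between the cyclic permutations of the three inputs, which is what tracing the perimeter of the triangular cube slice amounts to.

```python
import math

def cube_slice_gate(i1, i2, i3, r):
    """Continuous three-input swap: as the control r sweeps 0 -> 3,
    the outputs cycle through the permutations of the inputs,
    interpolating linearly in between. Each pairwise blend keeps
    i1 + i2 + i3 constant, so (I1+I2+I3)-(O1+O2+O3) stays zero."""
    vals = (i1, i2, i3)
    lo = math.floor(r)  # whole part of the rotation
    t = r - lo          # fractional part
    return tuple(
        vals[(k + lo) % 3] * (1 - t) + vals[(k + lo + 1) % 3] * t
        for k in range(3)
    )
```

At whole-number values of r the outputs are exact cyclic permutations of the inputs; in between, each output slides linearly toward its neighbor, and the interpolation weights sum to one so the total is conserved.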
Alright, so: let's start by taking the equation for Bekenstein–Hawking black hole entropy, which happens to saturate the Bekenstein bound. That supposedly has implications that lead people to wonder about holographic universes. And while the idea of the universe containing a possibly-infinite tree of nested black holes, the boundaries of which represent spacetime regions, sounds pretty interesting in and of itself, that stuff goes over my head a bit and isn't what I'm going to focus on in this entry anyway.

I'm just going to play around with a model universe where the Bekenstein bound is instead a Bekenstein equation and see what I get from that! Also, I'm going to be using Planck units, partly for simplicity in general, but also because the Bekenstein bound (as presented in the Wikipedia article, anyway) doesn't seem to prescribe a particular unit of entropy/information for determining, say, how many bits of information you get by multiplying a Planck energy by a sphere of Planck-length radius, just a general proportion.

So yeah, we're doing dumb unit conversion with a presumed Bekenstein-equation relation of [entropy]=[area]/4=4*[Pi]*[mass]^2. Taking just [area]/4=4*[Pi]*[mass]^2: now let's look at vacuum energy! Let's ignore its actual value for now and just look at how it's in the form of [energy]/[length]^3. Now, area is [length]^2, so we can rewrite the preceding space-mass equation as [length]^2/4=4*[Pi]*[mass]^2. Further, in Planck units, the mass-energy equation is just [energy]=[mass], so we can further write it as [length]^2/4=4*[Pi]*[energy]^2.

So, let's simplify for length first. Multiply both sides of [length]^2/4=4*[Pi]*[energy]^2 by four to get [length]^2=16*[Pi]*[energy]^2. Now take the square root of both sides, so you have [length]=sqrt(16*[Pi]*[energy]^2). I'm not entirely sure, but if I understand the commutativity involved correctly, this means we get [length]=4*sqrt([Pi])*[energy]. Plugging in 4*sqrt([Pi])*[energy] as a replacement for length in the [energy]/[length]^3 unit value, we have [energy]/(4*sqrt([Pi])*[energy])^3. Checking this expression simplifier, it seems that x/x^3=x^-2. If I'm again understanding commutativity right, we can change this to be [energy]/((4*sqrt([Pi]))^3*[energy]^3), and then to (1/(64*[Pi]^(3/2)))*[energy]^-2.

Trusting the expression simplifier again, it says 1/64*X^(3/2)*Y^-2 evaluates to (1/64)X^(3/2)Y^(-2), therefore 1/64*[Pi]^(3/2)*[energy]^-2 evaluates to (1/64)[Pi]^(3/2)[energy]^(-2). Actually, I should see about just plugging in some stuff from earlier to see if it matches up. The expression simplifier says that (16*X*Y^2)^0.5=4*Y*X^(0.5), which looks much nicer than what I was working with before, and Y/(16*X*Y^2)^3 comes to... oh dear... (1/4096)Y^(-5)X^(-3)?

Well, I was going to go ahead and do a similar thing for the other side by simplifying and substituting energy, and then using that and the fact that both resulting values equal [energy]/[length]3 to arrive at either the same energy/mass to area equation as before, or to a different one that had maybe a different balance of exponents for length or something. Now, though, I'm not really sure what to do to make sure I'm doing my math consistently. I guess either check with someone else, or try finding some math analysis software that can help me keep track of what I'm doing better?
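As one way to keep the bookkeeping straight, here's a numeric sanity check on the substitution above, assuming only the [length]^2/4 = 4*[Pi]*[energy]^2 relation from the post (the function names are mine). Note that the [Pi]^(3/2) factor ends up in the denominator of the simplified form:

```python
import math

def length_from_energy(E):
    """Solve [length]^2 / 4 = 4*pi*[energy]^2 for the positive root:
    [length] = 4*sqrt(pi)*[energy], in Planck units."""
    return math.sqrt(16 * math.pi * E**2)

def vacuum_density_form(E):
    """[energy] / [length]^3 after substituting the length above."""
    return E / length_from_energy(E)**3

# The substitution should match [energy]^-2 / (64*pi^(3/2)) exactly:
for E in (0.5, 1.0, 2.0, 7.3):
    closed_form = 1 / (64 * math.pi**1.5 * E**2)
    assert math.isclose(vacuum_density_form(E), closed_form)
```

A computer algebra system (SymPy, say) could do the same substitution symbolically, which might be the kind of math-tracking software this entry ends by wishing for.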
So, I was thinking about how in 3D modeling, if you're just mucking about in a program with little to no direction, one problem you might run into is dissolving some vertex, edge, or face and ending up with an object that has poorly formed or connected faces. I was wondering if maybe there could be a mode of 3D editing that was a bit more resilient against that. At the same time, I was wondering if it'd be fun to play with a silly rendering system that drew things geometrically instead of breaking them down into triangles, even though there's plenty of reason no current 3D renderer does that.

Then I considered that if everything is being rendered as a collection of triangles, maybe everything could be represented as a collection of tetrahedrons? Like, maybe any 3D solid one might make in a renderer could be topologically identical to the outer boundary of some fully connected set of cells within a tetrahedral honeycomb? I'm not entirely sure, but either way I think a 3D modeler where you connect tetrahedral cells and distort them to the proportions you want via continuous map functions could be pretty fun to play with, and good for sketching out low-poly designs. Guess that'll be something to work on sometime.

Edit (Sept. 12, 2017, 12:03PM): Thinking about it later, I thought of a problem that might come up with doing this, but then I forgot it. So yeah, I'll see if it's fun to play with anyway, and keep trying to think of ways to just kind of toss data at a renderer and somehow tend to get shapes that avoid clipping with themselves without really trying, just because of how the math works out.
