Interesting, nanometre and smaller circuit potential science

Offline

Wayne Steven

  • Posts: 3362
  • Joined: Thu Aug 01, 2013 3:58 am
  • Location: Earth

Interesting, nanometre and smaller circuit potential science

Posted Wed Dec 04, 2019 3:58 pm

People around here may be familiar with the fact that I work on the design of many things. I was interested in designing a display circuit technology that can directly form visible light waves through switching with nanometre precision. So, I looked at the numbers to find out what minimum switching speed was needed. It turns out to be at least 299.792458 petahertz. I thought the 2 terahertz general circuit design I want to do was fast, but that's insane: a petahertz is 1000 times faster than a terahertz (usually more than enough). Interestingly, it also means that a single nanometre has the potential for movement at 299.792458 petahertz, which again is crazy. Of course, I know it's possible to make waves a lot more easily through other techniques, but the figure is interesting. But what happens to atomic structure when you try to push nanometre waves around at that sort of energy? Space is surprisingly empty of photons, even in the midday sun, so the support structures needed for this can more easily fit in the gaps between the emitters. I had all sorts of interesting support structure designs flitting through my head as I wrote this. The 299.792458 petahertz figure, I think, can at least be reached through some fancy stuff I've been thinking about for years.

The insane thing is memory addressing. When I did design proposals last decade for future processing technologies, a big problem was memory addressing. In my section of the design community, we like high-speed random access. Even saying that, a sequential segment scheme immediately comes to mind. Sequential access is the normal solution for high-speed memory access: the next memory accesses are presumed to be the following memory, and the program, the whole circuit and work path, are designed to maximise the speed of this access. A segmented scheme loads in only the memory actually used, to further enhance this technique, this time with a redirect to another part of memory presumed to follow.
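As a sanity check on the figure above: the minimum switching rate for directly forming a wave is the speed of light divided by the wavelength, so a 1 nm wave needs 299.792458 PHz. A minimal Python sketch (my own illustration of that arithmetic, not from the original design work):

```python
# Sketch: check the switching-frequency figure quoted above.
# Assumption: "nanometre precision" means directly synthesising a 1 nm wave,
# so the minimum switching rate is one full cycle per wavelength, f = c / wavelength.

C = 299_792_458.0  # speed of light in m/s

def switching_frequency_phz(wavelength_nm: float) -> float:
    """Frequency (in petahertz) needed to directly form a wave of this wavelength."""
    wavelength_m = wavelength_nm * 1e-9
    return C / wavelength_m / 1e15  # Hz -> PHz

print(switching_frequency_phz(1.0))    # 1 nm wave -> 299.792458 PHz
print(switching_frequency_phz(400.0))  # 400 nm violet light -> ~0.75 PHz
```

The same arithmetic shows why ordinary visible-light synthesis is three orders of magnitude less demanding than the 1 nm case.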
But neither the normal sequential nor the segmented solution is fast enough here. In my previous work, the proposal was to use a beam to read nearby memory, but anything over half a nanometre away drops the return-path speed of the beam below the 299.792458 petahertz figure, plus the reaction time (half a nanometre there and half a nanometre back, plus reaction times on either end). Five nanometres away is 29.9792458 petahertz or less, hundreds of bits or more. Basically, even going to a tenth of a nanometre in size, it's not enough for a good-sized cache. So, I devised a scheme to maximise that, and random access.

Then another solution pops into my head: in-line memory. The circuit contains related memory along the program, or circuit, path. But the solution is custom circuits instead of programs; now everything is inline except when you don't need it. Then you go to the other solution I came up with: not just binary circuits, but analogue circuits that may pass binary or integer values. Now the 299.792458 petahertz figure becomes very useful (relevant). You don't do much work in the nanometre or smaller range atomically; that's real brass-tacks analogue for you. We are talking about analogue below the easy, convenient digital logic circuit element structures. But the cumulative effect of all this analogue circuitry is as fast as you get in our plane of reference. Digital and FPGA are like Lego bricks compared to this.

I have also been interested in forming sub-atomic structures to reach lower dimensions and higher performance, even down to whatever is down at the Planck length. Modern physics seems to be working in with previous theories; there is the possibility of controlling the basic elements. The quarks themselves are many magnitudes bigger than the Planck length, and things are strange in the standard model.
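The beam-read limit described above is just a round-trip delay: memory at distance d can be polled at most c / (2d) times per second, before adding any reaction time. A small sketch of that bound (my own illustration; the distances match the figures in the post):

```python
# Sketch of the round-trip limit described above: a beam reading memory at
# distance d completes at most c / (2*d) accesses per second. Reaction time
# at either end only lowers this bound further.

C = 299_792_458.0  # speed of light in m/s

def max_access_rate_phz(distance_nm: float) -> float:
    """Upper bound on beam read rate (PHz) for memory `distance_nm` away."""
    round_trip_m = 2 * distance_nm * 1e-9
    return C / round_trip_m / 1e15  # Hz -> PHz

print(max_access_rate_phz(0.5))  # half a nanometre away -> 299.792458 PHz
print(max_access_rate_phz(5.0))  # 5 nm away -> 29.9792458 PHz, as in the post
```

This is why a cache only a few nanometres across already falls an order of magnitude below the target figure.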
We are yet to be able to see what is down that far. (Please note, researchers actively worry that the energy levels needed to see smaller things in future colliders could deflate the universe and destroy it. It is one of a handful of experimental risks they worry about being devastating.) But there is a way to do it safely that I've proposed. At those levels, things may become integer again, but also analogue to some extent (if it were all integer, we probably wouldn't have an analogue universe). If you look at it quickly, you can see that if things are integer at the smallest scales, then scale itself must be analogue (hint) in order to derive an analogue structure at higher scales. However, both can be used together. Thinking again, the scale thing blends in with the theory, and it gets weird. Among the handful of structural proposals, the scale thing fits in with the analogue-universe proposal, where the universe may have no Planck limit. This fits in with a proposal relating to the existence paradox in science, from its conventional viewpoint. If only we knew more.

If photons themselves are just packets of charge, packets of differing quantities of charge, then is charge just a quantity of some other element we don't see? I am not saying what, but I suspect a certain thing. Then things could be constructed at even smaller levels than the elements we know of. The processing speed of something the size of an atom could be more than all the computing power currently officially on the planet. But there is a catch: matter is mostly empty, and if you make something significant at such a small size, what happens to gravity? Such devices would have to be walled off from affecting gravity. The other issue is, say you could build a circuit the size of an atom that consumed as much mass as the Earth? That in itself is a high price to pay, let alone any gravitational effect. Let's say it is not that feasible to build such densities; by better using the basic elements, just the mass of a little matter becomes a computing powerhouse without too much hassle. The scientific concepts in this, in terms of engineering, are staggering. We are talking way beyond profound. We are talking about bending the laws of physics.

If anybody has seen the movie Supernova, and that strange object, we are talking about that sort of level of effect on the small scale. It's funny how a number of science fiction things, which seemed like fantasy, turned out to be possible as the scientific possibilities were coincidentally and separately explored. As somebody said, any sufficiently advanced science would appear to be magic to a more primitive culture.

Life is groovy, so to speak!
If you are not truthfully progressive, maybe you shouldn't say anything
Truthful side topics in-line with or related to, the discussion accepted
Often people deceive themselves so much they do not understand, even when the truth is explained to them
Wayne Steven

Re: Interesting, nanometre and smaller circuit potential sci

Posted Wed Dec 04, 2019 6:03 pm

I should explain a few things. The Planck length is calculated from a few constants which, without further proof, are mainly presumptions. So, to me the Planck length itself is not necessarily definite. While I mention quarks as the smallest particle in conventional physics, it is theorised that they are made of smaller particles, and that photons are made of quanta of charge. We merely cannot see deep enough. The other thing is that people presume that fields smaller than the Planck length would form black holes (that sounds similar to my theory), but then how could we have the space foam/fields? (You have to start questioning whether it is all made up of miniature black holes, making the energy and mass of the universe spectacular, but somehow defying normal physics. :) And if they are, and are supposed to spontaneously exhaust themselves through Hawking radiation, what radiation at that size? Maybe they are points of normal charge stabilised by surrounding spots of dark charge, now I'm just being rude. :) ) And I'm presuming the quanta of charge in photons are at that limit.

Quarks, photons and electrons are also presumed to be point-like objects, but scientists work out size values for them. While quarks are presumed a few magnitudes smaller than electrons, electron properties and therefore size are not locked down, with experimental results and theories suggesting anything from a few magnitudes smaller than quarks down to around the Planck length. So, I wonder a few things. Are the charge quanta in photons "sticky", giving their basic central point shape? However, the vector of travel potential (maybe a structural element, or differing types of photons) overcomes stickiness somewhat in close contact, or it is just the warping of the field of space. Are photons sticking to electrons/orbits, hence this accumulation and emission? It does seem to align with my theory if so. Then a photon can be thought of as a package of charge quanta, with size possibly down to the lowest limits. A photon, having a subset of the properties of the quantum foam/structure of space, deforms it, producing the wave nature and a misconception of size. But is it deformation, or presence in space?

Now, I don't presume that something at the Planck length has to have massive field strength forming black holes. I think that is problematic, going on the above. However, I do presume there are other structures down there without the issues mentioned before. So, it's not that I don't know how these things work; it's just that I have a differing point of view. Like when I tell people that chemistry is the ultimate science, because to me chemistry doesn't end at the molecular level; it goes down to the very smallest thing, holistically. Atoms, other non-basic particles, and even space, are just additional structures past molecules. So, it has to be asked: what are the interactions, and what else can be constructed, at these levels?
Wayne Steven

Re: Interesting, nanometre and smaller circuit potential sci

Posted Sun Dec 08, 2019 5:36 am

Single Large Photon.

One of the issues with photonic circuits is how to control them. Photons are very fleeting; you can slow them down, and with some effort (as previously demonstrated) stop them. So, it is hard to get them to do conventional processing. Instead, you pump photons through and adjust the stream, but that wastes power and time. More primitively, you get the photons to interact with physical materials to perform functions, which just slows things down.

Some years ago I realised I could make such a scheme, where the design could be programmed at home. It would run at maybe kilohertz, a nice presentation idea, but I wanted to implement it inside a CD/DVD. Maybe MHz or more could be achieved, and it would interface with the computer through the optical drive laser/pickup to run the program directly in the disc. Using variations, maybe it could work at hundreds of megahertz, and I think I was planning to use another optical technology to achieve faster speeds. But, as with such design exploration, a number of things are either suboptimal, suboptimal in the future, or too costly, and the homemade optical computer simply ticked one box out of three there. Now, a lot of that stuff is not in the market, as direct optical-disc-based software is gone, unless you count game consoles. My last cheap optical computing design technology could easily match today's, up to maybe terahertz territory. Makes me wonder why I wanted to go into magnetics, but magnetics can run at ultra-low energy, allowing deep 3D circuit stacking. Now, rotating the elements of the design model around in my head, I realise there is a better way to do it, though maybe not a redesignable one. Crossing a few other things into that, there is a way to make it reprogrammable, but it is hard to do dynamically, though I have another design that could be applied to that, with limitations. I demoted the optical design to B or C grade to prioritise better technologies.

But it occurred to me today that a way to get around light's fleeting nature is to use single large photons, or single long photons, or single large photon packets. By using longer wavelengths you get a longer length of interaction than with a 1 nm or smaller wave, moving the circuit tolerances into a cheap-to-manufacture realm, where even 3D printing (of a precise sort) could be used to manufacture optical components. Yep, seeing further advantages there. So, a 350 nm wave is plenty of atoms in length for interactions, but 2 microns or more allows a far longer interaction length and far looser tolerances, with more transparency in the post-650 nm IR ranges that optical technologies are more used to.
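The interaction-length argument above can be made concrete by counting how many atoms fit along one wavelength. A rough sketch, assuming a typical solid-state atomic spacing of about 0.3 nm (an illustrative figure of my own, not from the post):

```python
# Rough sketch: how many atoms span one wavelength, for the wavelengths
# discussed above. ATOM_SPACING_NM ~ 0.3 nm is an assumed typical
# interatomic distance in a solid, used only for illustration.

ATOM_SPACING_NM = 0.3

def atoms_per_wavelength(wavelength_nm: float) -> float:
    """Approximate number of atoms along one wavelength."""
    return wavelength_nm / ATOM_SPACING_NM

for wl in (1.0, 350.0, 2000.0):  # 1 nm wave, 350 nm UV, 2 micron IR
    print(f"{wl:7.1f} nm wave spans roughly {atoms_per_wavelength(wl):,.0f} atoms")
```

On these assumed numbers, a 2 micron wave spans thousands of atoms where a 1 nm wave spans only a few, which is the manufacturing-tolerance point being made.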

Investors?


Whoever woke me up the other day, please ring again.
