London’s 3DPrinting Trade Show >

Some of 3DPrinting’s possibilities will be on display at the UK’s first 3DPrinting trade show from Friday to next Sunday at The Brewery in central London. Clothes made using the technique will be exhibited in a live fashion show, which will include the unveiling of a hat designed for the event by the milliner Stephen Jones, and a band playing a specially composed score on 3DPrinted musical instruments.

3DPrinting is Star Trek science made reality, with the potential for production-line replacement body parts, aeronautical spares, fashion, furniture and virtually any other object on demand.

The cutting-edge technology, which layers plastic resin in a manner similar to an inkjet printer to create 3D objects, is on its way to becoming affordable for home use.

Some 2,000 consumers are expected to join 1,000 people from the burgeoning industry to see what the technique has to offer, including jewellery and art.

A 3D body scanner, which can reproduce a “mini” version of the person scanned, will also be on display. Workshops run by Jason Lopes of Legacy Effects, which provided 3DPrinted models and props for cinema blockbusters such as the Iron Man series and Snow White and the Huntsman, will add a sprinkling of Hollywood glamour. Kerry Hogarth, the woman behind 3DPrintshow, said yesterday she aims to showcase the potential of the technology for families.

Prices for printers start at around £1,000 (DIY kits from around £500) and will continue to drop steadily over the coming year.

Birmingham-based Black Country Atelier will invite people to design a model vehicle and then see the result “printed” off for them to take home.

“We believe 3DPrinting needs to be seen to be believed,” Ms Hogarth said. “We hope that our show will give fashion students, makers, designers, artists, families and businesses the chance to see the different types of services, software and print technology available to them.”

3D Printshow runs from 19-21 October.

Amazing Homemade 3DPrinted Drone >

When Mitre Corporation, a McLean-based defense contractor, announced that it was looking for summer interns, University of Virginia engineering student Steven Easter and his brother and lab partner, Jonathan Turman, applied for the job. They got the assignment: to build an unmanned aerial vehicle using 3DPrinting technology.

Luckily they got support from Professor David Sheffler, a 20-year veteran in aerospace engineering. Between May and August the team worked on designing and building a plane made entirely from 3DPrinted parts.

The plane has a two-metre wingspan and all the parts were printed in layers of plastic. During four test flights in August and early September at Milton Airfield near Keswick, the plane achieved a cruising speed of 70 kilometres per hour.

There are seven 3DPrinters in the Engineering School’s Rapid Prototyping Lab. These printers allow students to design, modify and print parts until they get exactly what they want…

(The unmanned aerial vehicle, “dressed” in U.Va.’s colors.)

(Mechanical and aerospace engineering professor and project adviser David Sheffler, left, with the “printed” plane’s creators, Steven Easter, center, and Jonathan Turman. )

This is “the third 3DPrinted plane known to have been built and flown,” UVA Today notes. The technology also allows students to take on complex design projects that previously were impractical.

“To make a plastic turbofan engine to scale five years ago would have taken two years, at a cost of about $250,000,” Sheffler said. “But with 3DPrinting we designed and built it in four months for about $2,000. This opens up an arena of teaching that was not available before. It allows us to train engineers for the real challenges they will face in industry.”


The students’ work impressed Mitre Corp. representatives and Army officials, and they were given a new task: “to build an improved plane – lighter, stronger, faster and more easily assembled.”

Besides creating an attractive and operational unmanned airplane, the project was also a valuable experience for the students. “The students sometimes put in 80-hour workweeks, with many long nights in the lab.”

“It was sort of a seat-of-the-pants thing at first – wham, bang,” Easter said. “But we kept banging away and became more confident as we kept designing and printing out new parts.”

Source: UVA Today

3DPrinted Customised Electronics >


‘Using printable electronics and rapid manufacturing processes, a more local consumer electronics industry is born. In this system, people select their electronic products online.’

That is the basis of the O.Update project at London’s Royal College of Art, by Hannes Harms, Alex du Preez and Peter Krige.

They explore the increasingly likely premise that, in the future, consumers will select their own electronic products online. They will browse an online database of electronic products and customise the objects they wish to have. The database can then link them to their local store.

At that local outlet, technicians will assist the customer in manufacturing their unique products using 3DPrinting, laser cutting and acid etching. In this way, objects are only manufactured on demand – this system could localise electronics manufacturing and reduce electronic waste.

Products can constantly evolve through update cards. When an update is available, a new printed electronic card is sent to the customer and the old card is sent back for re-manufacture or recycling.

Here is how they conceive their project, ‘O.Update,’ to materialise:
O.Update, Hannes Harms, Alex du Preez and Peter Krige

Visualising Data Using a 3DPrinter > > >


‘Some time ago, I had some data that lent themselves to a 3D surface plot. The problem was, the plot was quite asymmetrical, and finding the right viewing angle to see it effectively on a computer screen was extremely difficult. I spent ages tweaking angles and every possible view seemed to involve an unacceptable compromise.

Of course, displaying fundamentally 3D items in two dimensions is an ancient problem, as any cartographer will tell you. That night, as I lay thinking in bed, a solution presented itself… I had recently been reading about the work of a fellow University of Bath researcher, Adrian Bowyer, and his RepRap project, to produce an open-source 3DPrinter.

The solution was obvious: I had to find a way to print R data on one of these printers!

I managed to meet up with Adrian back in May 2012, and he explained to me the structure of the STL (stereolithography) files commonly used for three-dimensional printing. These describe an object as a large series of triangles. I decided I’d have a go at writing R code to produce valid STL files.
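To make the format concrete (this snippet is my own illustration, not from the original post): an ASCII STL file is just a header line, a list of triangular facets – each a unit normal followed by three vertices – and a footer. A single facet looks like:

```
solid r2stl-object
  facet normal 0 0 1
    outer loop
      vertex 0 0 0
      vertex 1 0 0
      vertex 0 1 0
    endloop
  endfacet
endsolid r2stl-object
```

A real model simply repeats the facet block thousands of times between the solid/endsolid lines.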

I’m normally a terrible hacker when it comes to programming; I usually storm in and try to make things work as quickly as possible then fix all the mistakes later. This time, I was much more methodical. As a little lesson to us all, the methodical approach worked: I had the core code producing valid STL files in under 3 hours.

Unfortunately, it then took until September 2012 before I could get hold of somebody with a 3DPrinter who’d let me test my code. A few days ago the first prototype was produced:


So now I’d like to share the code under a Creative Commons BY-NC-SA licence, in case anybody else finds it useful. You can download the code here, in a file called r2stl.r.

One day, when I learn how, I might try to make this a library, but for now you can just call this code with R’s source() command. All that is in the file is the function r2stl(), and having once loaded the file with source(), you can then use the r2stl() function to generate your STL files. The command is:

r2stl(x, y, z, filename='3d-R-object.stl', object.name='r2stl-object', z.expand=FALSE, min.height=0.008, show.persp=FALSE, strict.stl=FALSE)

    • x, y and z should be vectors of numbers, exactly as with R’s normal persp() plot. x and y represent a flat grid and z represents heights above this grid.
    • filename is self-explanatory.
    • object.name The STL file format requires the object that is being described to have a name specified inside the file. It’s unlikely anybody will ever see this, so there’s probably no point changing it from the default.
    • z.expand By default, r2stl() normalizes each axis so it runs from 0 to 1 (this is an attempt to give you an object that is agnostic with regard to how large it will eventually be printed). Normally, the code then rescales the z axis back down so its proportions relative to x and y are what they were originally. If, for some reason, you want your 3D plot to touch all six faces of the imaginary cube that surrounds it, set this parameter to TRUE.
    • min.height Your printed model would fall apart if some parts of it had z values of zero, as this would mean zero material is laid down in those parts of the plot. This parameter therefore provides a minimum height for the printed material. The default of 0.008 ensures that, when printed, no part of your object is thinner than around 0.5 mm, assuming that it is printed inside a 60 mm x 60 mm x 60 mm cube. Recall that the z axis gets scaled from 0 to 1, so if you are printing a 60mm-tall object then a z-value of 1 represents 60mm. The formula is min.height = minimum printed thickness / overall printed height: if we want a minimum printed thickness of 0.5mm and the overall height of the object will be 60mm, then 0.5/60 ≈ 0.008, which is the default. If you want the same minimum printed thickness of 0.5mm but want to print your object 100mm tall, this parameter would be set to 0.5/100 = 0.005.
    • show.persp Do you want to see a persp() plot of this object on your screen as the STL is being generated? Default is FALSE.
    • strict.stl To make files smaller, this code cheats and simply describes the entire rectangular base of your object as two huge triangles. This seems to work fine for printing, but isn’t strictly proper STL format. Set this to TRUE if you want the base of your object described as a large number of triangles and don’t mind larger files.

To view and test your STL files before you print them, you can use various programs. I have had good experiences with the free, open-source MeshLab.

Even if all you ever do is show people your 3D plots using MeshLab, I believe r2stl() still offers a useful service, as it makes viewing data far more interactive than static persp() plots.



# Let’s do the classic persp() demo plot, as shown in the photograph above
x <- seq(-10, 10, length=100)
y <- x
f <- function(x, y) { r <- sqrt(x^2 + y^2); 10 * sin(r)/r }
z <- outer(x, y, f)
z[is.na(z)] <- 1  # sin(r)/r is undefined at r = 0, so patch the NaN at the centre
r2stl(x, y, z, filename="lovelyfunction.stl", show.persp=TRUE)

# Now let’s look at R’s volcano data
z <- volcano
x <- 1:dim(volcano)[1]
y <- 1:dim(volcano)[2]
r2stl(x, y, z, filename="volcano.stl", show.persp=TRUE)

I hope you might find this code useful. ‘

– Ian Walker, Department of Psychology, University of Bath.

Pulling A Rabbit Out Of A Printer > > >

Liz Neely, Director of Digital Information & Access at the Art Institute of Chicago, has been one of those experimenting with 3D printing and 3D scanning. Here is a Q&A session between her and Seb Chan of Fresh and New:

Q – What has Art Institute of Chicago been doing in terms of 3D digitisation? Did you have something in play before the Met jumped the gun?

At the Art Institute before #Met3D, we had been experimenting with different image display techniques to meet the needs of our OSCI scholarly catalogues and the Gallery Connections iPad project. The first OSCI catalogues focus on the Impressionist painting collections, and therefore the image tools center on hyper-zooming to view brushstrokes, technical image layering, and vector annotations.

Because the Gallery Connections iPads focus on our European Decorative Arts (EDA), a 3Dimensional collection, our approach to photography has been decidedly different and revolves around providing access to these artworks beyond what can be experienced in the gallery. To this end we captured new 360-degree photography of objects, performed image manipulations to illustrate narratives and engaged a 3D animator to bring select objects to life.

For the 3D animations on the iPads, we required an exactitude and artistry to the renders to highlight the true richness of the original artworks. Rhys Bevan meticulously modelled and ‘skinned’ the renders using the high-end 3D software, Maya.

We often included the gray un-skinned wireframe models in presentations, because the animations were so true it was hard to communicate the fact that they were models. These beautiful 3D animations allow us to show the artworks in motion, such as the construction of the Model Chalice, an object meant to be deconstructed for travel in the 19th century.

These projects piqued my interest in 3D, so I signed up for a Maya class at SAIC, and, boy, it really wasn’t for me. Surprisingly, building immersive environments in the computer really bored me. Meanwhile, the emerging DIY scanning/printing/sharing community focused on a tactile outcome spoke more to me as a ‘maker’. This is closely aligned with my attraction to Arduino — a desire to bring the digital world into closer dialogue with our physical existence.

All this interest aside, I hadn’t planned anything for the Art Institute.

Mad props go out to our friends at the Met, who accelerated the 3D game with the #Met3D hackathon. Tweets and blogs coming out of the hackathon motivated action. It was time for all of us to step up and get the party started!

Despite my animated—wild jazz hands waving—enthusiasm for #Met3D, the idea still seemed too abstract to inspire a contagious reaction from my colleagues.

We needed to bring 3D printing to the Art Institute, experience it, and talk about it. My friend, artist and SAIC instructor Tom Burtonwood, had attended #Met3D and was all over the idea of getting 3D going at the Art Institute.

On July 19th, Tom and Mike Moceri arrived at the Art Institute dock in a shiny black SUV with a BATMAN license plate and a trunk packed with a couple Makerbots.

Our event was different from #Met3D in that we focused on allowing staff to experience 3D scanning and printing first hand. We began the day using iPads and 123D Catch to scan artworks. In the afternoon, the two Makerbots started printing in our Ryan Education Center and Mike demonstrated modelling techniques, including some examples using a Microsoft Kinect.

Colleagues began discussing a broad range of uses: education programs, creative re-mixing of the collection, exhibition layout planning, assisting the sight-impaired and prototyping artwork installation.

Q – Your recent scan of the Rabbit Tureen used a different method. You just used existing 2D photos, right? How did that work?

In testing image uploads onto the Gallery Connections iPad app, this particular Rabbit Tureen hypnotised me with its giant staring eye.

Many EDA objects have decoration on all sides, so we prioritised imaging much of the work from 72 angles to provide the visual illusion of a 360-degree view, like quickly paging through a flip book.

It occurred to me that since we had 360 photography, we might be able to mold that photography into a 3D model. This idea is particularly exciting because we could be setting ourselves up to amass an archive of 3DPrintable models through the museum’s normal course of 2D sculptural and decorative arts photography.

This hypothesis weighed on my thoughts such that I snuck back into the office over Labour Day weekend to grab the full set of 72 image files. Eureka! I loaded the files into 123D Catch and it created a near perfect 3D render.

By ‘near perfect’, I mean that the model only had one small hole and didn’t have any obvious deformities. With much Twitter guidance from Tom Burtonwood, I pulled the Catch model into Meshmaker to repair the hole and fill in the base. Voilà – we had a printable bunny!

The theory had been proven: with minimal effort while making our 360 images on the photography turntable, we are creating the building blocks for a 3DPrintable archive!

Q – What do you think are the emerging opportunities in 3D digitisation?

There are multitudes of opportunities for 3D scanning and printing with the most obvious being in education and collections access.

To get a good 3D scan of sculpture and other objects without gaping holes, the photographer must really look at the artwork, think about the angles, consider the shadows and capture all the important details.

This is just the kind of thought and ‘close looking’ we want to encourage in the museum. I’ve followed with great interest the use of 3D modelling in the Conservation Imaging Project led by Dale Kronkright at the Georgia O’Keeffe museum.

Q – Is 3D the next level for the Online Scholarly Catalogues Initiative?

A group of us work collaboratively with authors on each of our catalogues to determine which interactive technologies or resources are most appropriate to support the catalogue. We’re currently kicking off 360 degree imaging for our online scholarly Roman catalogue. In these scholarly catalogues, we would enforce a much higher bar of accuracy and review than the DIY rapid prototyping we’re doing in 123D Catch. It’s very possible we could provide 3D models with the catalogues, but we’ll have to address a deeper level of questions and likely engage a modelling expert as we have for the Gallery Connections iPad project.

More immediately, we can think of other access points to these printable models even if we cannot guarantee perfection. For example, I’ve started attaching Thing records to online collection records with associated disclaimers about accuracy. We strive to develop an ecosystem of access to linked resources authored and/or indexed for each publication and audience.

Q – Has anyone from your retail/shop operations participated? What do they think about this ‘object making’?

Like a traveling salesman I show up at every meeting with 2 or 3 printed replicas and an iPad with pictures and videos of all our current projects. At one meeting where I had an impromptu show and tell of the printed Art Institute lion, staff from our marketing team prompted a discussion about the feasibility of creating take-home DIY mold-a-ramas! It was decided that for now, the elongated print time is still a barrier to satisfying a rushed crowd. But in structured programs, we can design around these constraints.

At the Art Institute, 3D scanning and printing remains, for now, a grass-roots enthusiasm of a small set of colleagues. I’m excited by how many ideas have already surfaced, but am certain that even more innovations will emerge as it becomes more mainstream at the museum.

Q – I know you’re a keen Arduino boffin too. What contraptions do you next want to make using both 3DPrinting and Arduino? Will we be seeing any at MCN?

This should be interesting since MCN will kick off with a combined 3DPrinting and Arduino workshop co-led by the Met’s Don Undeen and Miriam Langer from the New Mexico Highlands University. We will surely see some wonderfully creative chaos, which will build throughout the conference.

These workshops may seem at first glance a bit removed from the daily work we do. I encourage everyone to embrace a maker project or workshop even if you can’t specifically pinpoint its relevance to your current projects. Getting your hands dirty in a creative project can bring an innovative mindset to e-publication, digital media and other engagement projects.

Sadly I won’t have time before MCN to produce an elaborate Arduino-driven Makerbot masterpiece. I’m currently dedicating my ‘project time’ to an overly ambitious installation artwork that incorporates Kinect, Arduino, Processing, servos, lights and sounds to address issues of balance…’

Adapted from an article by Seb Chan

KamerMaker: Game Changer? > > >

A massive mobile 3DPrinter to print architecture on demand… the stuff of science fiction again becomes reality with 3DPrinting.

While in one direction 3DPrinters, from home desktop printers to nanoscale lab machines, are printing ever smaller objects in ever finer detail, things are moving at the other end of the scale as well.

We’ve seen concepts of monolithic printers that can 3DPrint entire homes, but currently they appear to be for the future. In the here and now, however, real printers are getting larger, and DUS, a Dutch architecture firm, has produced a printer prototype large enough to print structures that can actually shelter people!

The KamerMaker is based upon DUS’s normal-sized 3DPrinter, the Ultimaker, with its print range increased to a huge 2.2m x 2.2m x 3.5m!

The unit is mobile – a travelling pavilion where on-demand architecture can respond to local needs. Think of such a printer producing home modules on-site, to provide permanent or temporary housing, perhaps even using recycled plastic. Here’s DUS’s amazing video:

(Video: “KamerMaker”, from DUS Architects on Vimeo.)


NASA’s Self-replicating Spaceships > > >

NASA investing in self-building spaceship research

NASA has invested $100,000 in SpiderFab, a project looking at the feasibility of launching a 3DPrinter into space to self-construct stations, and potentially massive ships, with a greater degree of complexity than has currently been possible.

The benefit of such an approach is that there would be no need to design the ship to withstand lift-off, which massively complicates current design processes and greatly increases cost, nor to fold it up into the confines of launch vehicles.

NASA believes self-assembling ships represent a conceptual quantum leap: craft could gather raw materials for assembly, and thus self-repair, in space, from asteroids, or even by recycling broken satellites in orbit.

The technology, as it develops, could lead to self-constructing satellites and telescopes, interstellar spacecraft, and much larger space stations.

2D printers on Earth are printing science fiction – soon, 3DPrinters in space will be printing the real thing.