3DPrinted Gun Pt4: What Now? >

[Image: the full Stratasys letter to Defense Distributed]

In a now ongoing saga, Defense Distributed, a group of pro-gun lobbyists aiming to 3DPrint a live firearm, has had its 3DPrinter seized by the machine’s manufacturer, Stratasys.

Comments on the Wiki Weapons story have so far condemned the notion of a 3DPrinted gun – one that would enable anyone, anywhere, to manufacture their own weapon – and are now joined by congratulatory remarks applauding Stratasys. Stratasys informed develop3d.com of its official line on the episode:

“Stratasys reserves the right to reject an order. Members of Defense Distributed, like any U.S. citizens, are able to follow the well-established federal and state regulations to manufacture, distribute or procure a firearm in [the U.S.A.].”

Matter resolved? Perhaps it could be concluded as such: a responsible company stepping in and doing the safe, legal and proper deed. But consider, for example, a 3DPrinting professional who visited a school in South London, U.K., to show students 3DPrinting and asked them what they could imagine printing for themselves… a student replied:

“Knives.”

Whilst some students may be intrigued by innovative cutlery design, and schoolboy bravado extends to an interest in weapons, fast cars and protein supplements, we will inevitably face a legislative backlash over the concern that, if anyone can download a file to manufacture a weapon and the technology continues to progress, ‘press to print products’ will degrade into a home 3DPrinting black market.

How do we stop the proliferation of 3DPrinted home weaponry? Restricting C.A.D. files of weapons from appearing online seems the obvious and popular suggestion – although this simply leads to the difficulty of policing the internet.

The limitations of most available 3DPrinters, materials and processes mean that a readily accessible 3DPrinted threat to humanity is certainly not here yet. But as a wave of concern now seems inevitable, so does the non-rhetorical question that 3DPrint makers, bloggers and journalists need to pose to their audience:

“What should we do about this?”

3DPrinted Gun Pt1: Control Debate >
3DPrinted Gun Pt2: Campaign Stopped >
3DPrinted Gun Pt3: Seized >
3DPrinted Gun Pt4: What Now? >


Visualising Data Using a 3DPrinter > > >

SPECIALIST KNOWLEDGE LEVEL > > > > > 5/5

‘Some time ago, I had some data that lent themselves to a 3D surface plot. The problem was, the plot was quite asymmetrical, and finding the right viewing angle to see it effectively on a computer screen was extremely difficult. I spent ages tweaking angles and every possible view seemed to involve an unacceptable compromise.

Of course, displaying fundamentally 3D items in two dimensions is an ancient problem, as any cartographer will tell you. That night, as I lay thinking in bed, a solution presented itself… I had recently been reading about the work of a fellow University of Bath researcher, Adrian Bowyer, and his RepRap project, to produce an open-source 3DPrinter.

The solution was obvious: I had to find a way to print R data on one of these printers!

I managed to meet up with Adrian back in May 2012, and he explained to me the structure of the STL (stereolithography) files commonly used for three-dimensional printing. These describe an object as a large series of triangles. I decided I’d have a go at writing R code to produce valid STL files.
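To make that concrete, here is a minimal sketch (an illustration only, not r2stl’s actual output code) that hand-writes a one-triangle ASCII STL file; each facet records a surface normal and exactly three vertices:

# A minimal sketch, not r2stl itself: hand-write a one-facet ASCII STL
# file to show the structure of the format. Each facet lists a unit
# normal and three vertices; real models contain thousands of facets.
stl.lines <- c(
  "solid demo",
  "  facet normal 0 0 1",
  "    outer loop",
  "      vertex 0 0 0",
  "      vertex 1 0 0",
  "      vertex 0 1 0",
  "    endloop",
  "  endfacet",
  "endsolid demo"
)
writeLines(stl.lines, "one-triangle.stl")  # open it in Meshlab to inspect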

I’m normally a terrible hacker when it comes to programming; I usually storm in and try to make things work as quickly as possible then fix all the mistakes later. This time, I was much more methodical. As a little lesson to us all, the methodical approach worked: I had the core code producing valid STL files in under 3 hours.

Unfortunately, it then took until September 2012 before I could get hold of somebody with a 3DPrinter who’d let me test my code. A few days ago the first prototype was produced:

[Image: 3dfunctionr.jpg, the first prototype print]

So now I’d like to share the code under a Creative Commons BY-NC-SA licence, in case anybody else finds it useful. You can download the code here, in a file called r2stl.r.

One day, when I learn how, I might try to make this a library, but for now you can just call this code with R’s source() command. All that is in the file is the function r2stl(), and having once called the file with source(), you can then use the r2stl() function to generate your STL files. The command is:

r2stl(x, y, z, filename='3d-R-object.stl', object.name='r2stl-object', z.expand=FALSE, min.height=0.008, show.persp=FALSE, strict.stl=FALSE)

    • x, y and z should be vectors of numbers, exactly as with R’s normal persp() plot. x and y represent a flat grid and z represents heights above this grid.
    • filename is self-explanatory.
    • object.name The STL file format requires the object that is being described to have a name specified inside the file. It’s unlikely anybody will ever see this, so there’s probably no point changing it from the default.
    • z.expand By default, r2stl() normalizes each axis so it runs from 0 to 1 (this is an attempt to give you an object that is agnostic with regard to how large it will eventually be printed). Normally, the code then rescales the z axis back down so its proportions relative to x and y are what they were originally. If, for some reason, you want your 3D plot to touch all six faces of the imaginary cube that surrounds it, set this parameter to TRUE.
    • min.height Your printed model would fall apart if some parts of it had z values of zero, as zero material would be laid down at those points. This parameter therefore sets a minimum height for the printed material. The default of 0.008 ensures that, when printed, no part of your object is thinner than around 0.5 mm, assuming it is printed inside a 60 mm x 60 mm x 60 mm cube. Recall that the z axis gets scaled to run from 0 to 1, so if you are printing a 60 mm-tall object then a z value of 1 represents 60 mm. The formula is min.height = min.mm/overall.mm: a minimum printed thickness of 0.5 mm on a 60 mm print gives 0.5/60 = 0.008, the default, while the same 0.5 mm minimum on a 100 mm print gives 0.5/100 = 0.005. See the worked sketch after this list.
    • show.persp Do you want to see a persp() plot of this object on your screen as the STL is being generated? Default is FALSE.
    • strict.stl To make files smaller, this code cheats and simply describes the entire rectangular base of your object as two huge triangles. This seems to work fine for printing, but isn’t strictly proper STL format. Set this to TRUE if you want the base of your object described as a large number of triangles and don’t mind larger files.
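As a worked example of the scaling behaviour described above (a sketch of what the parameters imply, not r2stl()’s actual internals):

# Illustrative sketch only; r2stl() performs this scaling internally.
normalize <- function(v) (v - min(v)) / (max(v) - min(v))  # axis -> [0, 1]

# min.height = minimum printed thickness / overall printed height:
min.mm     <- 0.5                  # thinnest acceptable material, in mm
overall.mm <- 60                   # intended height of the print, in mm
min.height <- min.mm / overall.mm  # 0.5/60 = ~0.008, the default

# Clamp the normalized z axis so no part of the model prints too thin:
z <- pmax(normalize(volcano), min.height)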

To view and test your STL files before you print them, you can use various programs. I have had good experiences with the free, open-source Meshlab.

Even if all you ever do is show people your 3D plots using Meshlab, I believe r2stl() still offers a useful service, as it makes viewing data far more interactive than static persp() plots.

Demo

source('r2stl.r')

# Let's do the classic persp() demo plot, as shown in the photograph above
x <- seq(-10, 10, length=100)
y <- x
f <- function(x, y) { r <- sqrt(x^2 + y^2); 10 * sin(r)/r }
z <- outer(x, y, f)
z[is.na(z)] <- 1  # sin(r)/r is NaN at r = 0; patch that point
r2stl(x, y, z, filename="lovelyfunction.stl", show.persp=TRUE)

# Now let's look at R's volcano data
z <- volcano
x <- 1:dim(volcano)[1]
y <- 1:dim(volcano)[2]
r2stl(x, y, z, filename="volcano.stl", show.persp=TRUE)

I hope you might find this code useful.’

– Ian Walker, Department of Psychology, University of Bath.

http://psychologicalstatistics.blogspot.co.uk/2012/09/guest-post-visualizing-data-using-3d.html

Pulling A Rabbit Out Of A Printer > > >

Liz Neely, Director of Digital Information & Access at the Art Institute of Chicago, has been one of those experimenting with 3DPrinting and 3DScanning. Here is a Q&A session between her and Seb Chan of Fresh and New:

Q – What has Art Institute of Chicago been doing in terms of 3D digitisation? Did you have something in play before the Met jumped the gun?

At the Art Institute before #Met3D, we had been experimenting with different image display techniques to meet the needs of our OSCI scholarly catalogues and the Gallery Connections iPad project. The first OSCI catalogues focus on the Impressionist painting collections, and therefore the image tools center on hyper-zooming to view brushstrokes, technical image layering, and vector annotations.

Because the Gallery Connections iPads focus on our European Decorative Arts (EDA), a three-dimensional collection, our approach to photography has been decidedly different and revolves around providing access to these artworks beyond what can be experienced in the gallery. To this end we captured new 360-degree photography of objects, performed image manipulations to illustrate narratives and engaged a 3D animator to bring select objects to life.

For the 3D animations on the iPads, we required exactitude and artistry in the renders to highlight the true richness of the original artworks. Rhys Bevan meticulously modelled and ‘skinned’ the renders using the high-end 3D software Maya.

We often included the gray un-skinned wireframe models in presentations, because the animations were so true to life that it was hard to communicate the fact that they were models. These beautiful 3D animations allow us to show the artworks in motion, such as the construction of the Model Chalice, an object meant to be deconstructed for travel in the 19th century.

These projects piqued my interest in 3D, so I signed up for a Maya class at SAIC, and, boy, it really wasn’t for me. Surprisingly, building immersive environments in the computer really bored me. Meanwhile, the emerging DIY scanning/printing/sharing community focused on a tactile outcome spoke more to me as a ‘maker’. This is closely aligned with my attraction to Arduino — a desire to bring the digital world into closer dialogue with our physical existence.

All this interest aside, I hadn’t planned anything for the Art Institute.

Mad props go out to our friends at the Met, who accelerated the 3D game with the #Met3D hackathon. Tweets and blogs coming out of the hackathon motivated action. It was time for all of us to step up and get the party started!

Despite my animated—wild jazz hands waving—enthusiasm for #Met3D, the idea still seemed too abstract to inspire a contagious reaction from my colleagues.

We needed to bring 3D printing to the Art Institute, experience it, and talk about it. My friend, artist and SAIC instructor Tom Burtonwood, had attended #Met3D and was all over the idea of getting 3D going at the Art Institute.

On July 19th, Tom and Mike Moceri arrived at the Art Institute dock in a shiny black SUV with a BATMAN license plate and a trunk packed with a couple of MakerBots.

Our event was different from #Met3D in that we focused on allowing staff to experience 3D scanning and printing first hand. We began the day using iPads and 123D Catch to scan artworks. In the afternoon, the two MakerBots started printing in our Ryan Education Center and Mike demonstrated modelling techniques, including some examples using a Microsoft Kinect.

Colleagues began discussing a broad range of uses for education programs, creative re-mixing of the collection, exhibition layout planning, assisting the sight-impaired and prototyping artwork installation.

Q – Your recent scan of the Rabbit Tureen used a different method. You just used existing 2D photos, right? How did that work?

While I was testing image uploads onto the Gallery Connections iPad app, this particular Rabbit Tureen hypnotised me with its giant staring eye.

Many EDA objects have decoration on all sides, so we prioritised imaging much of the work from 72 angles to provide the visual illusion of a 360-degree view, like quickly paging through a flip book.

It occurred to me that since we had 360 photography, we might be able to mold that photography into a 3D model. This idea is particularly exciting because we could be setting ourselves up to amass an archive of 3DPrintable models through the museum’s normal course of 2D sculptural and decorative arts photography.

This hypothesis weighed on my thoughts such that I snuck back into the office over Labor Day weekend to grab the full set of 72 image files. Eureka! I loaded the files into 123D Catch and it created a near-perfect 3D render.

By ‘near-perfect’, I mean that the model had only one small hole and didn’t have any obvious deformities. With much Twitter guidance from Tom Burtonwood, I pulled the Catch model into Meshmixer to repair the hole and fill in the base. Voila! We had a printable bunny!

The theory had been proven: with minimal effort while making our 360 images on the photography turntable, we are creating the building blocks for a 3DPrintable archive!

Q – What do you think are the emerging opportunities in 3D digitisation?

There are multitudes of opportunities for 3D scanning and printing, the most obvious being in education and collections access.

To get a good 3D scan of sculpture and other objects without gaping holes, the photographer must really look at the artwork, think about the angles, consider the shadows and capture all the important details.

This is just the kind of thought and ‘close looking’ we want to encourage in the museum. I’ve followed with great interest the use of 3D modelling in the Conservation Imaging Project led by Dale Kronkright at the Georgia O’Keeffe museum.

Q – Is 3D the next level for the Online Scholarly Catalogues Initiative?

A group of us work collaboratively with authors on each of our catalogues to determine which interactive technologies or resources are most appropriate to support the catalogue. We’re currently kicking off 360 degree imaging for our online scholarly Roman catalogue. In these scholarly catalogues, we would enforce a much higher bar of accuracy and review than the DIY rapid prototyping we’re doing in 123D Catch. It’s very possible we could provide 3D models with the catalogues, but we’ll have to address a deeper level of questions and likely engage a modelling expert as we have for the Gallery Connections iPad project.

More immediately, we can think of other access points to these printable models even if we cannot guarantee perfection. For example, I’ve started attaching Thing records to online collection records with associated disclaimers about accuracy. We strive to develop an ecosystem of access to linked resources authored and/or indexed for each publication and audience.

Q – Has anyone from your retail/shop operations participated? What do they think about this ‘object making’?

Like a traveling salesman, I show up at every meeting with 2 or 3 printed replicas and an iPad with pictures and videos of all our current projects. At one meeting, where I had an impromptu show-and-tell of the printed Art Institute lion, staff from our marketing team prompted a discussion about the feasibility of creating take-home DIY mold-a-ramas! It was decided that, for now, the elongated print time is still a barrier to satisfying a rushed crowd. But in structured programs, we can design around these constraints.

At the Art Institute, 3D scanning and printing remains, for now, a grass-roots enthusiasm of a small set of colleagues. I’m excited by how many ideas have already surfaced, but am certain that even more innovations will emerge as it becomes more mainstream at the museum.

Q – I know you’re a keen Arduino boffin too. What contraptions do you next want to make using both 3DPrinting and Arduino? Will we be seeing any at MCN?

This should be interesting since MCN will kick off with a combined 3DPrinting and Arduino workshop co-led by the Met’s Don Undeen and Miriam Langer from the New Mexico Highlands University. We will surely see some wonderfully creative chaos, which will build throughout the conference.

These workshops may seem, at first glance, a bit removed from the daily work we do. I encourage everyone to embrace a maker project or workshop even if you can’t specifically pinpoint its relevance to your current projects. Getting your hands dirty in a creative project can bring an innovative mindset to e-publication, digital media and other engagement projects.

Sadly I won’t have time before MCN to produce an elaborate Arduino-driven Makerbot masterpiece. I’m currently dedicating my ‘project time’ to an overly ambitious installation artwork that incorporates Kinect, Arduino, Processing, servos, lights and sounds to address issues of balance…’

Adapted from an article by Seb Chan

http://www.freshandnew.org/2012/09/pulling-rabbit-mesh-hat-liz-neely-talks-3d-digitisation-3d-printing/