# Screenshot-to-code
**Repository Path**: wungshihao/Screenshot-to-code
## Basic Information
- **Project Name**: Screenshot-to-code
- **Description**: Synced from the GitHub repository Screenshot-to-code
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 3
- **Forks**: 0
- **Created**: 2023-11-22
- **Last Updated**: 2024-07-23
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
---
**A detailed tutorial covering the code in this repository:** [Turning design mockups into code with deep learning](https://emilwallner.medium.com/how-you-can-train-an-ai-to-convert-your-design-mockups-into-html-and-css-cc7afd82fed4).
**Plug:** 👉 Check out my 60-page guide, [No ML Degree](https://twitter.com/EmilWallner/status/1528961488206979072), on how to land a machine learning job without a degree.
The neural network is built in three iterations: it starts with a Hello World version, moves on to the main neural network layers, and ends with a version trained to generalize.
The models are based on Tony Beltramelli's [pix2code](https://github.com/tonybeltramelli/pix2code), and inspired by Airbnb's [sketching interfaces](https://airbnb.design/sketching-interfaces/), and Harvard's [im2markup](https://github.com/harvardnlp/im2markup).
**Note:** only the Bootstrap version can generalize to new design mock-ups. It uses 16 domain-specific tokens that are translated into HTML/CSS, and it achieves 97% accuracy. The best model uses a GRU instead of an LSTM, and this version can be trained on a few GPUs. The raw HTML version has the potential to generalize, but it is still unproven and requires a significant number of GPUs to train. The current model is also trained on a small, homogeneous dataset, so it's hard to tell how well it behaves on more complex layouts.
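For intuition, here is a minimal Keras sketch of that model shape (not the exact architecture from the notebooks; layer sizes, vocabulary size, and input resolution are illustrative assumptions): a CNN encodes the screenshot, a GRU encodes the tokens generated so far, and a decoder GRU predicts the next domain-specific token.
``` python
# Illustrative sketch only -- sizes and shapes are assumptions,
# not the exact architecture used in the notebooks.
from keras.layers import (Input, Conv2D, Flatten, Dense, Embedding,
                          GRU, RepeatVector, concatenate)
from keras.models import Model

VOCAB_SIZE = 19   # 16 domain tokens plus start/end/padding (assumed)
MAX_LEN = 48      # context window of previous tokens (assumed)

# Image encoder: screenshot -> fixed-size feature vector
image_input = Input(shape=(256, 256, 3))
x = Conv2D(32, (3, 3), strides=2, activation='relu')(image_input)
x = Conv2D(64, (3, 3), strides=2, activation='relu')(x)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
image_features = RepeatVector(MAX_LEN)(x)  # one copy per decoder step

# Language encoder: previously generated tokens -> sequence features
token_input = Input(shape=(MAX_LEN,))
y = Embedding(VOCAB_SIZE, 50)(token_input)
y = GRU(128, return_sequences=True)(y)

# Decoder: fuse both streams and predict the next token
decoder = concatenate([image_features, y])
decoder = GRU(256, return_sequences=False)(decoder)
output = Dense(VOCAB_SIZE, activation='softmax')(decoder)

model = Model(inputs=[image_input, token_input], outputs=output)
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
```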
A quick overview of the process:
### 1) Give a design image to the trained neural network

### 2) The neural network converts the image into HTML markup
### 3) Rendered output

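Step 2 is essentially image-captioning-style decoding: the model repeatedly predicts the next DSL token given the screenshot and the tokens generated so far. Below is a rough sketch of that loop, assuming a trained `model` like the one sketched above and a fitted Keras `Tokenizer` (both hypothetical names, not taken from the notebooks).
``` python
import numpy as np
from keras.preprocessing.sequence import pad_sequences

def generate_markup(model, tokenizer, image, max_len=48):
    """Greedy decoding sketch: predict one DSL token at a time until <END>.

    Assumes the Tokenizer was fitted with filters='' so the <START>/<END>
    markers survive tokenization.
    """
    tokens = ['<START>']
    for _ in range(max_len):
        # Encode the tokens generated so far and pad to a fixed length
        seq = tokenizer.texts_to_sequences([' '.join(tokens)])[0]
        seq = pad_sequences([seq], maxlen=max_len)
        # Predict a probability distribution over the next token
        probs = model.predict([np.array([image]), seq], verbose=0)[0]
        next_token = tokenizer.index_word.get(int(np.argmax(probs)), '<END>')
        tokens.append(next_token)
        if next_token == '<END>':
            break
    # DSL tokens; a compiler then turns them into HTML/CSS
    return ' '.join(tokens[1:-1])
```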
## Installation
### FloydHub
[Run on FloydHub](https://floydhub.com/run?template=https://github.com/floydhub/pix2code-template)
Click this button to open a [Workspace](https://blog.floydhub.com/workspaces/) on [FloydHub](https://www.floydhub.com/?utm_medium=readme&utm_source=pix2code&utm_campaign=aug_2018) where you will find the same environment and dataset used for the *Bootstrap version*. You can also find the trained models for testing.
### Local
``` bash
pip install keras tensorflow pillow h5py jupyter
```
``` bash
git clone https://github.com/emilwallner/Screenshot-to-code.git
cd Screenshot-to-code/
jupyter notebook
```
Open the desired notebook (files ending in `.ipynb`). To run the model, go to the menu and click Cell > Run All.
The final version, the Bootstrap version, ships with a small dataset so you can test-run the model. If you want to try it with all the data, download the dataset here: https://www.floydhub.com/emilwallner/datasets/imagetocode, and point `dir_name` at it.
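For example (the path below is a placeholder for wherever you unpack the download; `dir_name` is the variable used in the notebooks):
``` python
import os

# Placeholder path: point this at the unpacked FloydHub dataset
dir_name = 'data/imagetocode/train/'

# Sanity check: count the screenshots before training
# (.png screenshots assumed, as in the pix2code-style dataset)
images = [f for f in os.listdir(dir_name) if f.endswith('.png')]
print(len(images), 'screenshots found in', dir_name)
```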
## Folder structure
``` bash
| |-Bootstrap # The Bootstrap version
| | |-compiler # A compiler to turn the tokens into HTML/CSS (by pix2code)
| | |-resources
| | | |-eval_light # 10 test images and markup
| |-Hello_world # The Hello World version
| |-HTML # The HTML version
| | |-Resources_for_index_file # CSS, images, and scripts to test the index.html file
| | |-html # HTML files to train it on
| | |-images # Screenshots for training
|-readme_images # Images for the readme page
```
## Hello World