Kyle Banerjee
banerjek@ohsu.edu
Web Scraping Basics
The truth of the matter is...
Web scraping is one of the
worst ways to get data!
What’s wrong with scraping?
1. Slow, resource intensive, not scalable
2. Unreliable -- breaks when website
changes and works poorly with
responsive design techniques
3. Difficult to parse data
4. Harvest looks like an attack
5. Often prohibited by TOS
Before writing a scraper
Call!
● Explore better options
● Check terms of service
● Ask permission
● Can you afford scrape
errors?
Alternatives to scraping
1. Data dumps
2. API
3. Direct database connections
4. Shipping drives
5. Shared infrastructure
Many datasets are easy to retrieve
You can often export search results
Why scrape the Web?
1. Might be the only method available
2. Sometimes can get precombined or
preprocessed info that would otherwise
be hard to generate
Things to know
1. Web scraping is about parsing and
cleaning.
2. You don’t need to be a programmer, but
scripting experience is very helpful.
Don’t use Excel. Seriously.
Excel
● Mangles your data
○ Identifiers and numeric data at risk
● Cannot handle carriage returns in data
● Crashes with large files
● OpenRefine is a better tool for situations
where you think you need Excel
http://openrefine.org
Harvesting options
● Free utilities
● Purchased software
● DaaS (Data as a Service) -- hosted web
spidering
● Write your own
Watch out for spider traps!
● Web pages that intentionally or
unintentionally cause a crawler to make
an infinite number of requests
● No algorithm can detect all spider traps
Ask for help!
1. Methods described here are familiar to
almost all systems people
2. Domain experts can help you identify tools
and shortcuts that are especially relevant
to you
3. Bouncing ideas off *anyone* usually results
in a superior outcome
Handy skills
● DOM: Identify and extract data
● Regular expressions: Identify and extract data
● Command line: Process large files
● Scripting: Automate repetitive tasks; perform complex operations
Handy basic tools
● Web scraping service: Simplify data acquisition
● cURL (command line): Easily retrieve data using APIs
● wget (command line): Recursively retrieve web pages
● OpenRefine: Process and clean data
Power tools
● grep, sed, awk, tr, paste: Select and transform data in VERY large files quickly
● jq: Easily manipulate JSON
● xml2json: Convert XML to JSON
● csvkit: Utilities to convert to and work with CSV
● scrape: HTML extraction using XPath and CSS selectors
Web scraping, the easy way
● Hosted services allow you to easily target
specific structures and pages
● Programming experience unnecessary, but
helpful
● For unfamiliar problems, ask for help
Hosted example, Scrapinghub
Scrapinghub data output
Document Object Model (DOM)
● Programming interface for HTML and XML
documents
● Supported by many languages/environments
● Represents documents in a tree structure
● Used to directly access content
Document Object Model (DOM) Tree
/document/html/body/div/p = “text node”
XPath is a syntax for defining
parts of an XML document
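As a minimal sketch of evaluating an XPath like the one above from the command line, the xmllint utility that ships with libxml2 can run an expression against a local file; the file name here is hypothetical:

xmllint --html --xpath '//div/p/text()' page.html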
The Swiss Army Knife of data
Regular Expressions
● Special strings that allow you to search
and replace based on patterns
● Supported in a wide variety of software
and all operating systems
Regular expressions can...
● Use logic, capitalization, edges of
words/lines, express ranges, use bits (or
all) of what you matched in replacements
● Convert free text into XML, XML into delimited
text or codes, and vice versa
● Find complex patterns using proximity
indicators and/or involving multiple lines
● Select preferred versions of fields
Quick Regular Expression Guide
^   Match the start of the line
$   Match the end of the line
.   Match any single character
*   Match zero or more of the previous character
[A-DG-J0-5]*   Match zero or more of ABCDGHIJ012345
[^A-C]   Match any one character that is NOT A, B, or C
(dog)   Match the word "dog", including case, and remember that text to be used later in the match or replacement
\1   Insert the first remembered text as if it were typed here (\2 for the second, \3 for the third, etc.)
\   Use to match special characters. \\ matches a backslash, \* matches an asterisk, etc.
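For instance, a group and a backreference can be tried straight from the command line with sed; the sample text is made up:

echo "dog bites dog" | sed -E 's/(dog)/[\1]/g'
[dog] bites [dog]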
Data can contain weird problems
● XML metadata contained errors on every
field that contained an HTML entity (&amp;
&lt; &gt; &quot; &apos; etc.)
<b>Oregon Health &amp</b>
<b> Science University</b>
● Error occurs in many fields scattered across
thousands of records
● But this can be fixed in seconds!
Regular expressions to the rescue!
● “Whenever a field ends in an HTML entity
minus the semicolon and is followed by an
identical field, join those into a single field and
fix the entity. Any line can begin with an
unknown number of tabs or spaces”
/^\s*<([^>]+>)(.*)(&[a-z]+)<\/\1\n\s*<\1/<\1\2\3;/
Confusing at first, but easier than you think!
● Works on all platforms and is built into a
lot of software (including Office)
● Ask for help! Programmers can help you
with syntax
● Let’s walk through our example which
involves matching and joining unknown
fields across multiple lines...
Regular Expression Analysis
/^\s*<([^>]+>)(.*)(&[a-z]+)<\/\1\n\s*<\1/<\1\2\3;/

^   Beginning of line
\s*<   Zero or more whitespace characters followed by "<"
([^>]+>)   One or more characters that are not ">" followed by ">" (i.e. a tag). Store in \1
(.*)   Any characters up to the next part of the pattern. Store in \2
(&[a-z]+)   Ampersand followed by letters (an HTML entity). Store in \3
<\/\1\n   "</" followed by \1 (i.e. the closing tag) followed by a newline
\s*<\1   Any number of whitespace characters followed by tag \1
/<\1\2\3;/   Replace everything matched up to this point with "<" followed by \1 (the opening tag), \2 (field contents), \3, and ";" (fixing the HTML entity). This effectively joins the fields
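One way to actually apply a multi-line substitution like this is a Perl one-liner (Perl isn't covered in this deck, but it is present on most systems and uses the same pattern syntax; the file names are hypothetical). The -0777 switch reads the whole file at once so the pattern can match across the newline:

perl -0777 -pe 's/^\s*<([^>]+>)(.*)(&[a-z]+)<\/\1\n\s*<\1/<\1\2\3;/mg' records.xml > fixed.xml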
The command line
● Often the easiest way by far
● Process files of any size
● Combine the power of individual programs
in a single command (pipes)
● Supported by all major platforms
Getting started with the command line
● macOS (use Terminal)
○ Install Homebrew
○ 'brew install [package name]'
● Windows 10
○ Enable the Windows Subsystem for Linux and open a bash terminal
○ 'sudo apt-get install [package name]'
● Or install Linux in VirtualBox
○ 'sudo apt-get install [package name]' from a terminal
Learning the command line
● The power of pipes -- combine programs!
● Google solutions for specific problems --
there are many online examples
● Learn one command at a time. Don’t worry
about what you don’t need.
● Try, but give up fast. Ask Linux geeks for
help.
Scripting is the command line!
● Simple text files that allow you to combine
utilities and programs written in any language
● No programming experience necessary
● Great for automating processes
● For unfamiliar problems, ask for help
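A minimal sketch of such a script, assuming a hypothetical API and a plain-text list of record IDs; the sleep keeps the harvest from looking like an attack:

#!/bin/bash
# fetch one JSON record per ID listed in ids.txt (hypothetical file and endpoint)
while read -r id; do
  curl -s "https://api.example.org/records/$id.json" > "record_$id.json"
  sleep 2   # be polite to the server
done < ids.txt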
wget
● A command line tool to retrieve data from web
servers
● Works on all operating systems
● Works with unstable connections
● Great for recursive downloads of data files
● Flexible. Can use patterns, specify depth, etc
wget example
wget --recursive ftp://157.98.192.110/ntp-cebs/datatype/microarray/HESI/
Filezilla is good for FTP using a GUI
cURL
● A tool to transfer data from or to a server
● Works with many protocols, can deal with
authentication
● Especially useful for APIs -- the preferred way
to download data using multiple transactions
Things that make life easier
1. JSON (JavaScript Object Notation)
2. XML (eXtensible Markup Language)
3. API (Application Programming Interface)
4. Specialized protocols
5. Using request headers to retrieve pages
that are easier to parse
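For example, many services will return JSON instead of HTML if you ask for it in a request header; curl's -H flag sets one (the URL here is hypothetical):

curl -s -H "Accept: application/json" "https://example.org/api/items/123"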
There are only two kinds of data
1. Parseable
2. Unparseable
BUT
● Some structures are much easier to work
with than others
● Convert to whatever is easiest for the task
at hand
Generally speaking
● Strings
Easiest to work with, fastest, requires fewest resources,
greatest number of tools available.
● XML
Powerful but hardest to work with, slowest, requires
greatest number of resources, very inefficient for large files.
● JSON
Much more sophisticated access than strings, much easier
to work with than XML and requires fewer resources.
Awkward with certain data.
JSON example
curl https://accessgudid.nlm.nih.gov/api/v1/devices/lookup.json?di=04041346001043

XML example
curl https://accessgudid.nlm.nih.gov/api/v1/devices/lookup.xml?di=04041346001043
When processing large XML files
● Convert to JSON if possible, use string
based tools, or at least break the file into
smaller XML documents.
● DOM based tools such as XSLT must load the
entire file into memory, where it can take 10
times more space during processing
● If you need DOM based tools such as XSLT,
break the file into many chunks where each
record is its own document (see the csplit sketch below)
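As a sketch of the chunking approach, GNU csplit splits a file at every occurrence of a pattern; this assumes each record begins with a <record> tag, which is an assumption about your data:

csplit -z big.xml '/<record>/' '{*}'

The pieces land in files named xx00, xx01, xx02, and so on, one record per file.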
Using APIs
● Most common type is REST (REpresentational
State Transfer) -- a fancy way of saying they
work like a Web form
● Normally have to transmit credentials or other
information. cURL is very good for this
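A typical pattern (the endpoint and token here are hypothetical) is to send credentials in a header with every request:

curl -s -H "Authorization: Bearer $API_TOKEN" "https://api.example.org/v1/records?page=2"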
How about Linked Data?
● Uses relationships to connect data
● Great for certain types of complex data
● You must have programming skills to download
and use these
● Often can be interacted with via API
● Can be flattened and manipulated using
traditional tools
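For example, public SPARQL endpoints such as Wikidata's can be queried with plain curl; this sketch asks for five items that are instances of "human" and requests the results as JSON:

curl -s -G 'https://query.wikidata.org/sparql' -H 'Accept: application/sparql-results+json' --data-urlencode 'query=SELECT ?item WHERE { ?item wdt:P31 wd:Q5 } LIMIT 5'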
grep
● Command line utility to select lines
matching a regular expression
● Very good for extracting just the data
you’re interested in
● Use with small or very large (terabytes)
files
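A typical use is pulling just the lines you care about out of a huge file before doing anything else; the file names and pattern are hypothetical:

grep -E '^2023-1[0-2]' server.log > q4.log

This keeps only the lines that start with 2023-10, 2023-11, or 2023-12.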
sed
● Command line utility to select, parse, and
transform lines
● Great for “fixing” data so that it can be
used with other programs
● Extremely powerful and works great with
very large (terabytes) files
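For instance, stripping markup so the text can be fed to other tools works on a stream of any size; the file names are hypothetical:

sed -E 's/<[^>]+>//g' export.xml > text_only.txt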
tr
● Command line utility to translate individual
characters from one to another
● Great for prepping data in files too large
to load into any program
● Particularly useful in combination with sed
for fixing large delimited files containing
line breaks within the data itself
paste
● Command line utility that prints
corresponding lines of files side by side
● Great for combining data from large files
● Also very handy for fixing data
Delimited file with bad line feeds
{myfile.txt}
a1,a2,a3,a4,a5
,a6
b1,b2,b3,b4
,b5,b6
c1,c2,c3,c4,c5,c6
d1
,d2,d3,d4,
d5,d6
Fixed in seconds!
tr "n" "," < myfile.txt | 
sed 's/,+/,/g' | tr "," "n" | paste -s -d",,,,,n"
a1,a2,a3,a4,a5,a6
b1,b2,b3,b4,b5,b6
c1,c2,c3,c4,c5,c6
d1,d2,d3,d4,d5,d6
The power of pipes!
Command Analysis
tr "n" "," < myfile.txt | sed 's/,+/,/g' | tr "," "n" |paste -s -d",,,,,n"
tr “n” “,” < myfile.txt Convert all newlines to commas
| sed ‘/s,+/,/g’ Pipe to sed, convert all multiple instances of
commas to a single comma. Sed step is
necessary because you don’t know how
many newlines are bogus or where they are
| tr “,” “n” Pipe to tr which converts all commas into
newlines
| paste -s -d “,,,,,”n” Pipe to paste command which converts
single column file to output 6 columns wide
using a comma as a delimiter terminated by
a newline
awk
● Outstanding for reading, transforming,
and creating data in rows and columns
● Complete pattern scanning language for
text, but typically used to transform the
output of other commands
Extract the 2nd and 5th fields

{myfile}
a1 a2 a3 a4 a5 a6
b1 b2 b3 b4 b5 b6
c1 c2 c3 c4 c5 c6
d1 d2 d3 d4 d5 d6

awk '{print $2,$5}' myfile

a2 a5
b2 b5
c2 c5
d2 d5
jq
● Like sed, but optimized for JSON
● Includes logical and conditional operators,
variables, functions, and powerful features
● Very good for selecting, filtering, and
formatting more complex data
JSON example
curl https://accessgudid.nlm.nih.gov/api/v1/devices/lookup.json?di=04041346001043
Extract deviceID if cuff detected
curl https://accessgudid.nlm.nih.gov/api/v1/devices/lookup.json?di=04041346001043 | jq '.gudid.device | select(.brandName | test("cuff")) | .identifiers.identifier.deviceId'

"04041346001043"
The power of pipes!
Don’t try to remember all this!
● Ask for help -- this stuff is easy
for Linux geeks
● Google can help you with
commands/syntax
● Online forums are also helpful,
but don’t mind the trolls
If you want a GUI, use OpenRefine
http://openrefine.org
● Sophisticated, including regular
expression support
● Convert between different formats
● Up to a couple hundred thousand rows
● Even has clustering capabilities!
Web Scraping Basics
Normalization is more conceptual than technical
● Every situation is unique and depends on the
data you have and what you need
● Don’t fob off data analysis on technical
people who don’t understand your data
● It’s sometimes not possible to fix everything
Solutions are often domain specific!
● Data sources
● Challenges
● Tools
● Tricks
Questions?
Kyle Banerjee
banerjek@ohsu.edu