CSS Flex - replacing an inline display layout with flexbox

I have created a layout and it displays fine using inline(-block) display. Now I would like to use flexbox instead, but I can't tell whether I am doing it correctly.

HTML

<div class="content-section">
   <div class="box">
      <div class="current medium">
         <div class="box-tool-bar">   
            <span class="title"></span>         
            <span class="controls">             
            <i class="fa fa-window-minimize"></i>       
            <i class="fa fa-window-restore"></i>       
            <i class="fa fa-window-maximize" ></i>           
            <i class="fa fa-window-close" ></i>     
            </span>    
         </div>
         <div class="content">   
            The team shipped every 3 weeks, tightly coordinating the schedule with each service team. It works … pretty much as one would expect a web application to work.
            And then everything went crazy.
            As we started ramping up the number of services in Azure, it became infeasible for one team to write all of the UI. The teams which owned the service were now responsible (mostly) for writing their own UI, inside of the portal source repository. This had the benefit of allowing individual teams to control their own destiny. However - it now mean that we had hundreds of developers all writing code in the same repository. A change made to the SQL Server management experience could break the Azure Web Sites experience. A change to a CSS file by a developer working on virtual machines could break the experience in storage. Coordinating the 3 week ship schedule became really hard. The team was tracking dependencies across multiple organizations, the underlying REST APIs that powered the experiences, and the release cadence of ~40 teams across the company that were delivering cloud services.
            Scaling to ∞ services
            Given the difficulties of the engineering and ship processes with the current portal, scaling to 200 different services didn’t seem like a great idea with the current infrastructure. The next time around, we took a different approach.
            The new portal is designed like an operating system. It provides a set of UI widgets, a navigation framework, data management APIs, and other various services one would expect to find with any UI framework. The portal team is responsible for building the operating system (or the shell, as we like to call it), and for the overall health of the portal.
            Sandboxing in the browser
            To claim we’re an OS, we had to build a sandboxing model. One badly behaving application shouldn’t have the ability to bring down the whole OS. In addition to that - an application shouldn’t be able to grab data from another, unless by an approved mechanism. JavaScript by default doesn’t really lend itself well to this kind of isolation - most web developers are used to picking up something like jQuery, and directly working against the DOM. This wasn’t going to work if we wanted to protect the OS against badly behaving (or even malicious) code.
            To get around this, each new service in Azure builds what we call an ‘extension’. It’s pretty much an application to our operating system. It runs in isolation, inside of an IFRAME. When the portal loads, we inject some bootstrapping scripts into each IFRAME at runtime. Those scripts provide the structured API extensions use to communicate with the shell. This API includes things like:
            Defining parts, blades, and commands
            Customizing the UI of parts
            Binding data into UI elements
            Sending notifications
            The most important aspect is that the extension developer doesn’t get to run arbitrary JavaScript in the portal’s window. They can only run script in their IFRAME - which does not project UI. If an extension starts to fault - we can shut it down before it damages the broader system. We spent some time looking into web workers - but found some reliability problems when using &gt; 20 of them at the same time. We’ll probably end up back there at some point.
            Distributed continuous deployment
            In this model, each extension is essentially it’s own web application. Each service hosts their own extension, which is pulled into the shell at runtime. The various UI services of Azure aren’t composed until they are loaded in the browser. This lets us do some really cool stuff. At any given point, a separate experience in the portal (for example, Azure Websites) can choose to deploy an extension that affects only their UI - completely independent of the rest of the portal.
            IFRAMEs are not used to render the UI - that’s all done in the core frame. The IFRAME is only used to automate the JavaScript APIs that communicate over window.postMessage().
         </div>
      </div>
      <div class="next">
         <div class="box">
            <div class="current large">
               <div class="box-tool-bar">    
                  <span class="title"></span>           
                  <span class="controls">                  
                  <i class="fa fa-window-minimize"></i>                  
                  <i class="fa fa-window-restore"></i>                  
                  <i class="fa fa-window-maximize"></i>                 
                  <i class="fa fa-window-close"></i>            
                  </span>          
               </div>
               <div class="content">   The team shipped every 3 weeks, tightly coordinating the schedule with each service team. It works … pretty much as one would expect a web application to work. And then everything went crazy. As we started ramping up the number of services in Azure, it became infeasible for one team to write all of the UI. The    </div>
            </div>
            <div class="next">
               <div class="box">
                  <div class="current medium" >
                     <div class="box-tool-bar">  
                        <span class="title" ></span>      
                        <span class="controls"> 
                        <i class="fa fa-window-minimize" ></i>        
                        <i class="fa fa-window-restore" ></i>         
                        <i class="fa fa-window-maximize" ></i>              
                        <i class="fa fa-window-close" ></i>          
                        </span>      
                     </div>
                     <div class="content">   The team shipped every 3 weeks, tightly coordinating the schedule with each service team. It works … pretty much as one would expect a web application to work. And then everything went crazy. As we started ramping up the number of services in Azure, it became infeasible for one team to write all of the UI. The    </div>
                  </div>
                  <div class="next"></div>
               </div>
            </div>
         </div>
      </div>
   </div>
</div>

So you just want the same thing, but without the spacing between the blocks? The first thing to do is probably to check the validity of the HTML (nesting, opening/closing tags, and so on). Browsers will try to correct some errors; if there are too many, the styling may fail differently from one browser to another. Validate your structure first (or the inner tree), then look at where the styling goes wrong, if it still does :) I see some 10px margins to reduce, if that is the gap you mean.
@GerardReches yes… you will also notice that a child can have an unlimited number of children of different widths…
@GCyrillus, all of the HTML is correctly formatted…
OK then, flex or inline-block probably won't make much of a difference; each box is a child of the previous one rather than a sibling. If that is the structure you want, then of course it can work :)
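If the unwanted gap is just the 10px the comments point to, it most likely comes from the margin-right on .box in the SCSS below (plus the 30% on the last box), and can be zeroed out independently of whether the layout uses flex or inline-block. A minimal sketch against the class names used above; keep the 30% if that trailing scroll space is intentional:

// Minimal sketch: close the gaps between boxes without changing the layout model.
.content-section .box {
    margin-right: 0;        // was 10px

    &:last-child {
        margin-right: 0;    // was 30%
    }
}

Note that inline-block siblings also pick up small gaps from the whitespace between tags in the markup; flex items do not, which is one practical reason to switch. The SCSS currently driving the inline-block version is: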
.content-section {
    position: relative;
    width: 83%;
    height: 500px;
    overflow-x: auto;
    overflow-y: hidden;
    white-space: nowrap;
    background-color: #444;    

    .box {
        width: auto;
        height: 100%;
        margin-right: 10px;
        display: inline-block;

        .current {
            display: inline-block;
            height: 100%;

            &.small {
                width: 200px;
            }

            &.medium {
                width: 400px;
            }

            &.large {
                width: 600px;
            }
        }

        .next {
            width: auto;
            height: 100%;
            display: inline-block;
        }

        .box-tool-bar {
            height: 19px;
            position: relative;
            display: table;
            width: 100%;
            background-color: #ffffff;
            border-bottom: 1px solid #333333;

            .controls {
                display: table-cell;
                float: right;

                i {
                    font-size: 16px;
                    cursor: pointer;
                }
            }

            .title {
                display: table-cell;
                float: left;
            }
        }

        .content {
            height: calc(100% - 20px);
            overflow-y: auto;
            overflow-x: hidden;
            white-space: normal;
            background-color: #ffffff;
            font-size: 15px;
            color: #333333;
            line-height: 24px;
            padding-top: 10px;
        }
        /*&:first-child {
            margin-left: 10px;
        }*/
        &:last-child {
            margin-right: 30%;
        }
    }
}
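As for the flexbox version itself: the same nested structure can be expressed with flex containers instead of inline-block, which also removes the need for white-space: nowrap and for inline-block whitespace workarounds. The sketch below is not tested against the exact markup, but it keeps the class names and the small/medium/large widths from the SCSS above; the .content, .title and .controls rules can stay as they are, or the toolbar can switch to flex as shown at the end.

// Flexbox sketch of the same layout. Each .box lays its .current column and
// its .next wrapper out side by side; the nesting chains the columns rightwards.
.content-section {
    display: flex;
    width: 83%;
    height: 500px;
    overflow-x: auto;           // horizontal scrolling stays on the outer container
    overflow-y: hidden;
    background-color: #444;

    .box {
        display: flex;          // .current and .next become flex items
        height: 100%;
        margin-right: 10px;     // drop this if the 10px gap is unwanted

        .current {
            flex: 0 0 auto;     // fixed-width column: no growing, no shrinking

            &.small  { width: 200px; }
            &.medium { width: 400px; }
            &.large  { width: 600px; }
        }

        .next {
            display: flex;      // the nested .box keeps the chain going
            height: 100%;
        }
    }

    // The toolbar can also drop display: table and the floats:
    .box-tool-bar {
        display: flex;
        justify-content: space-between;   // .title on the left, .controls on the right
        align-items: center;
    }
}

Whether this looks any different from the inline-block version mostly comes down to flex: 0 0 auto keeping the columns at their fixed widths and flex items ignoring the whitespace in the markup; the nesting itself (each box a child of the previous one, as noted in the comments) is preserved either way.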